Posted to builds@beam.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2021/09/02 18:49:48 UTC

Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2377

See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2377/display/redirect?page=changes>

Changes:

[Steve Niemitz] [BEAM-12767] Add another unit test for PipelineOptions deserialization


------------------------------------------
[...truncated 350.74 KB...]
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:72)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 02, 2021 6:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 02, 2021 6:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 02, 2021 6:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 02, 2021 6:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 02, 2021 6:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 02, 2021 6:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 02, 2021 6:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..3=[{inputs}], proj#0..3=[{exprs}])
      BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 02, 2021 6:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (`type` = 'story' OR `type` = 'job') AND `score` > 2
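
At this point the planner has folded both the projection (usedFields) and the filter into the BigQuery source itself, which is what the push-down test measures. As a rough user-level sketch, the same query could be issued through Beam SQL's public entry point, SqlTransform, along these lines (registration of the HACKER_NEWS table against BigQuery is assumed and omitted; this is not the IT's actual code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQueryExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Assumes a table provider exposing HACKER_NEWS has already been
        // registered (e.g. via SqlTransform's withTableProvider); omitted here.
        PCollection<Row> result =
            p.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
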
    Sep 02, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 02, 2021 6:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 02, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <115841 bytes, hash 7067f75afeddc8805cd16f3a75349968e4223819644f6776a68c2bb6c0f5d718> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-cGf3Wv7dyIBc0W86dTSZaOQiOBlkT2d2powrtsD11xg.pb
    Sep 02, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 02, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-ytej5qC_OvUfqwQf0G2BZteGa2RtQ9zU6RVC-b8HFUs.jar
    Sep 02, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4747870347571417666.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-eYw9YTW-ipkH6dFR6fXA5q5lpRpHCxEfYh-nNWE34rU.jar
    Sep 02, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 files newly uploaded in 0 seconds
    Sep 02, 2021 6:45:19 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_229/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 02, 2021 6:45:19 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)
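
The SEVERE message that opens this trace is gRPC's orphaned-channel detector: a ManagedChannel (here, the one backing the BigQuery Storage write client) was garbage-collected while still open. The shutdown discipline the warning asks for looks like the following generic sketch, independent of Beam's internal client code:

    import java.util.concurrent.TimeUnit;
    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    public class ChannelShutdownExample {
      public static void main(String[] args) throws InterruptedException {
        ManagedChannel channel =
            ManagedChannelBuilder.forAddress("bigquerystorage.googleapis.com", 443)
                .useTransportSecurity()
                .build();
        try {
          // ... issue RPCs over the channel ...
        } finally {
          // What the warning asks for: initiate shutdown, then block until
          // awaitTermination reports the channel has fully terminated.
          channel.shutdown();
          if (!channel.awaitTermination(30, TimeUnit.SECONDS)) {
            channel.shutdownNow();
          }
        }
      }
    }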

    Sep 02, 2021 6:45:20 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_229/ParDo(RowMonitor) as step s2
    Sep 02, 2021 6:45:20 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamCalcRel_285/ParDo(Calc) as step s3
    Sep 02, 2021 6:45:20 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s4
    Sep 02, 2021 6:45:20 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 02, 2021 6:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-02_11_45_20-14835984397004002849?project=apache-beam-testing
    Sep 02, 2021 6:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-02_11_45_20-14835984397004002849
    Sep 02, 2021 6:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-02_11_45_20-14835984397004002849
    Sep 02, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-02T18:45:23.585Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 02, 2021 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:45:32.068Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 02, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:45:32.809Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 02, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:45:32.850Z: Expanding GroupByKey operations into optimizable parts.
    Sep 02, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:45:32.889Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 02, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:45:32.980Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 02, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:45:33.003Z: Fusing consumer BeamPushDownIOSourceRel_229/ParDo(RowMonitor) into BeamPushDownIOSourceRel_229/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 02, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:45:33.039Z: Fusing consumer BeamCalcRel_285/ParDo(Calc) into BeamPushDownIOSourceRel_229/ParDo(RowMonitor)
    Sep 02, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:45:33.065Z: Fusing consumer ParDo(TimeMonitor) into BeamCalcRel_285/ParDo(Calc)
    Sep 02, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:45:33.431Z: Executing operation BeamPushDownIOSourceRel_229/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_229/ParDo(RowMonitor)+BeamCalcRel_285/ParDo(Calc)+ParDo(TimeMonitor)
    Sep 02, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:45:33.505Z: Starting 5 workers in us-central1-c...
    Sep 02, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:45:50.464Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:46:09.600Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:46:09.655Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 02, 2021 6:46:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:46:19.981Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 02, 2021 6:46:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:46:44.142Z: Workers have started successfully.
    Sep 02, 2021 6:46:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:46:44.168Z: Workers have started successfully.
    Sep 02, 2021 6:47:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:47:15.976Z: Finished operation BeamPushDownIOSourceRel_229/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_229/ParDo(RowMonitor)+BeamCalcRel_285/ParDo(Calc)+ParDo(TimeMonitor)
    Sep 02, 2021 6:47:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:47:16.110Z: Cleaning up.
    Sep 02, 2021 6:47:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:47:16.181Z: Stopping worker pool...
    Sep 02, 2021 6:49:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:49:40.228Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 02, 2021 6:49:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-02T18:49:40.275Z: Worker pool stopped.
    Sep 02, 2021 6:49:45 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-02_11_45_20-14835984397004002849 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5debfae6-ca91-4476-99e2-a81f6e9904fa and timestamp: 2021-09-02T18:49:45.802000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.294

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 02, 2021 6:49:46 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 43.319 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 28s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/r2gn2oftdlaiq

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2627

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2627/display/redirect>

Changes:


------------------------------------------
[...truncated 341.79 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 7918487ff3e2167c7fd65eb5bc00d48d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'
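
The -DbeamTestPipelineOptions JSON in the command above is how these integration tests receive their Dataflow options; TestPipeline deserializes that system property itself. A minimal sketch of the usual idiom (hypothetical test class, not BigQueryIOPushDownIT's actual code):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;
    import org.junit.Rule;

    public class ExampleIT {
      // testingPipelineOptions() parses the -DbeamTestPipelineOptions JSON,
      // falling back to defaults when the property is unset.
      private static final PipelineOptions OPTIONS = TestPipeline.testingPipelineOptions();

      @Rule public final TestPipeline pipeline = TestPipeline.fromOptions(OPTIONS);
    }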

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 04, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 04, 2021 6:45:07 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 04, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 04, 2021 6:45:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 6:45:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1334773707]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
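
This coder failure is Beam's standard diagnostic for a PCollection<Row> that never had a schema attached: Row has no inferable Coder, so the ParDo output cannot finish specifying. A minimal, self-contained sketch of the remedy the message itself points to, with a hypothetical schema standing in for the test's real one:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Hypothetical schema mirroring the four fields the test query selects.
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();
        PCollection<Row> rows =
            p.apply(
                Create.of(Row.withSchema(schema).addValues("a", "story", "t", 3L).build())
                    .withRowSchema(schema));
        // A ParDo that emits Row drops the schema; without the setRowSchema call
        // below, resolving the output coder fails with the IllegalStateException
        // shown above.
        PCollection<Row> passedThrough =
            rows.apply(
                    ParDo.of(
                        new DoFn<Row, Row>() {
                          @ProcessElement
                          public void process(@Element Row row, OutputReceiver<Row> out) {
                            out.output(row);
                          }
                        }))
                .setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }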

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1596936525]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 04, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 04, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 04, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 04, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 04, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4455652386111425539.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-5FRRxV0QA_YYLcSBPRnkSGheyJBcTfb-kQr-7EejQAk.jar
    Nov 04, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Nov 04, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 04, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash 74f9bc57c48b1a71db9a27c78fb0eef2fc93a893f8b4f3f7659257f7a6a50f69> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-dPm8V8SLGnHbmifHj7Du8vyTqJP4tPP3ZZJX96alD2k.pb
    Nov 04, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 04, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 04, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 04, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 04, 2021 6:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-03_23_45_21-7813787463402869550?project=apache-beam-testing
    Nov 04, 2021 6:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-03_23_45_21-7813787463402869550
    Nov 04, 2021 6:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-03_23_45_21-7813787463402869550
    Nov 04, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-04T06:45:28.416Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 04, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:37.405Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 04, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.009Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 04, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.054Z: Expanding GroupByKey operations into optimizable parts.
    Nov 04, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.086Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 04, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.148Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 04, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.179Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 04, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.212Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 04, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.544Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 04, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.645Z: Starting 5 workers in us-central1-a...
    Nov 04, 2021 6:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:46:06.017Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 04, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:46:28.366Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 04, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:46:54.525Z: Workers have started successfully.
    Nov 04, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:46:54.550Z: Workers have started successfully.
    Nov 04, 2021 6:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:47:23.836Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 04, 2021 6:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:47:23.971Z: Cleaning up.
    Nov 04, 2021 6:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:47:24.048Z: Stopping worker pool...
    Nov 04, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:49:46.638Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 04, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:49:46.683Z: Worker pool stopped.
    Nov 04, 2021 6:49:53 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-03_23_45_21-7813787463402869550 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ea3bdc43-72de-42cd-affc-7a235d177c5d and timestamp: 2021-11-04T06:49:53.546000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.355

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 04, 2021 6:49:53 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 50.528 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 34s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/hzsd35ktbby34

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2626

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2626/display/redirect?page=changes>

Changes:

[noreply] Add missing parentheses for Python test example

[noreply] Update test input and imports

[25622840+adude3141] [BEAM-13157] support hadoop configuration on ParquetIO.Parse

[Pablo Estrada] JdbcIO has a single WriteFn underlying all implementations

[Luke Cwik] [BEAM-13015] Use a network based channel instead of an inmemory one

[Pablo Estrada] Addressing comments

[noreply] Merge pull request #15839 from [BEAM-13041][Playground] Prepare files

[noreply] [BEAM-13001] updated CHANGES.md to include msec counter for Go (#15872)

[noreply] Add window mapping to CHANGES.md (#15871)

[noreply] Merge pull request #15784 from [BEAM-8135]  - Removing

[noreply] [BEAM-13119] Subdirectory prefix tag for Go SDK (#15881)

[noreply] Merge pull request #15852 from [BEAM-13102] [Playground] update

[Kyle Weaver] Fix typo: s/spark/twister2


------------------------------------------
[...truncated 341.49 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 7918487ff3e2167c7fd65eb5bc00d48d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 04, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 04, 2021 12:45:19 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 04, 2021 12:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 04, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1958174945]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@307649172]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 04, 2021 12:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 04, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 04, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 04, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests-wq460SWma404IxeH4GpbhmTrsPsdyGkKfzx-yBtHWMs.jar
    Nov 04, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6646685300132765611.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-gm7il0NID2ImTTV4QaFV_gtdJa36-04og-BPG3yu7U4.jar
    Nov 04, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Nov 04, 2021 12:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 04, 2021 12:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash 2496d79ee130178d719ba56cce06ea66df7e7870ebc93f3dd59fe567819d7d05> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-JJbXnuEwF41xm6VszgbqZt9-eHDryT891Z_lZ4GdfQU.pb
    Nov 04, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 04, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 04, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 04, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 04, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-03_17_45_32-3784561410186712558?project=apache-beam-testing
    Nov 04, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-03_17_45_32-3784561410186712558
    Nov 04, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-03_17_45_32-3784561410186712558
    Nov 04, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-04T00:45:36.229Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:43.988Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:44.803Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:44.833Z: Expanding GroupByKey operations into optimizable parts.
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:44.860Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:44.938Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:44.964Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:45.011Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:45.416Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:45.495Z: Starting 5 workers in us-central1-a...
    Nov 04, 2021 12:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:46:17.321Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 04, 2021 12:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:46:28.904Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Nov 04, 2021 12:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:46:28.967Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Nov 04, 2021 12:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:46:39.446Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 04, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:47:02.259Z: Workers have started successfully.
    Nov 04, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:47:02.284Z: Workers have started successfully.
    Nov 04, 2021 12:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:47:32.087Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 04, 2021 12:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:47:32.228Z: Cleaning up.
    Nov 04, 2021 12:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:47:32.306Z: Stopping worker pool...
    Nov 04, 2021 12:49:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:49:52.832Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 04, 2021 12:49:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:49:52.881Z: Worker pool stopped.
    Nov 04, 2021 12:49:58 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-03_17_45_32-3784561410186712558 finished with status DONE.
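
For reference, the BEAMPlan logged above (a BeamPushDownIOSourceRel with usedFields and BigQueryFilter) is what the planner produces when both the projection and the filter can be pushed into the BigQuery Storage Read API, which is why no BeamCalcRel remains in the plan. The same query shape can be exercised against any schema-aware PCollection of Rows; a hedged sketch follows, assuming a collection like 'monitored' from the earlier sketch. When SqlTransform is applied directly to a single PCollection, that input is registered under the table name PCOLLECTION; the push-down rel itself only appears with a BigQuery table and method DIRECT_READ, as in this test:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // 'monitored' is assumed to be a schema-aware PCollection<Row> with the
    // by/type/title/score fields, e.g. as built in the earlier sketch.
    PCollection<Row> filtered =
        monitored.apply(
            SqlTransform.query(
                "SELECT `by` AS `author`, `type`, `title`, `score` "
                    + "FROM PCOLLECTION "
                    + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));
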

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 07c8576d-d8de-4c52-978d-96b19d320a6b and timestamp: 2021-11-04T00:49:58.585000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.819
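
The fields_read and read_time values above come from the monitoring ParDos visible in the step names (RowMonitor, TimeMonitor). A hedged sketch of how such a DoFn can publish values through Beam's Metrics API follows; the real test utilities under org.apache.beam.sdk.testutils may differ in detail:

    import org.apache.beam.sdk.metrics.Counter;
    import org.apache.beam.sdk.metrics.Distribution;
    import org.apache.beam.sdk.metrics.Metrics;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.values.Row;

    // Passes Rows through unchanged while counting fields and recording a
    // processing timestamp; a harness can turn the distribution's max minus
    // min into a read_time in seconds after the job completes.
    class RowMonitorFn extends DoFn<Row, Row> {
      private final Counter fieldsRead = Metrics.counter(RowMonitorFn.class, "fields_read");
      private final Distribution readTime = Metrics.distribution(RowMonitorFn.class, "read_time");

      @ProcessElement
      public void processElement(@Element Row row, OutputReceiver<Row> out) {
        fieldsRead.inc(row.getFieldCount());
        readTime.update(System.currentTimeMillis());
        out.output(row);
      }
    }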

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 04, 2021 12:49:58 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
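
The warning above means the run finished but skipped publishing its metrics to InfluxDB because no measurement/database was configured. In Beam perf-test jobs these normally arrive as extra pipeline options; the option names and values below follow the convention used in Beam's Jenkins configs and are an assumption here, not something this log confirms:

    --influxDatabase=beam_test_metrics
    --influxMeasurement=sql_bqio_read_java_batch
    --influxHost=http://localhost:8086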

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 44.362 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 37s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/cwrhxnbd4reym

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2625

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2625/display/redirect?page=changes>

Changes:

[stranniknm] [BEAM-13105] playground - add shorcut hint tooltip

[dpcollins] Performance improvement to PubSubLiteIO to not use a streaming committer

[noreply] Merge pull request #15854 from [BEAM-13046][Playground] protobuf

[noreply] [BEAM-12550] Parallelizable skew Implementation  (#15809)

[noreply] Merge pull request #15782 from [BEAM-13034] [Playground] add semantics

[hlagosperez] Use BigQuieryIO.loadProjectId  in WriteRename class to create

[noreply] [BEAM-13099] Use vendored Calcite 1.28.0 in SQL extensions (#15836)


------------------------------------------
[...truncated 351.67 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 232'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 232'
Successfully started process 'Gradle Test Executor 232'
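
The executor command above is what Gradle launches internally. To reproduce the same test run locally, an invocation along the following lines should work; the -DbeamTestPipelineOptions property name matches the command above, while the option values are placeholders to fill in:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest \
        --tests org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT \
        -DbeamTestPipelineOptions='["--project=<your-project>","--tempLocation=gs://<your-bucket>/tmp","--runner=DataflowRunner","--region=us-central1"]'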

Gradle Test Executor 232 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 03, 2021 7:09:20 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 03, 2021 7:09:21 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 03, 2021 7:09:22 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 03, 2021 7:09:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 7:09:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 7:09:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 7:09:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 7:09:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 7:09:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 7:09:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
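
Regarding the deprecation WARNING near the top of the block above: the change it suggests is a one-flag substitution in the pipeline options this job passes, replacing the empty --workerHarnessContainerImage= with the newer flag. The image name below is illustrative only, not taken from this job's configuration:

    --sdkContainerImage=apache/beam_java8_sdk:2.35.0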


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1958174945]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@307649172]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 7:09:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 03, 2021 7:09:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 03, 2021 7:09:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 03, 2021 7:09:34 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 03, 2021 7:09:34 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 03, 2021 7:09:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests-j-IzoedawBtFeEQjTP_qkinAzKtlGqtdDTI4hpidSs4.jar
    Nov 03, 2021 7:09:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-nCFHg_Ok6oUNqJWQK4aGsEvd0a5KXUR12Et1-E8vKtA.jar
    Nov 03, 2021 7:09:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5536625703926143093.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-pXm-nRgXs_lcCEVEJ9vb-adUaCE_Z3vZh5VChJGHlT0.jar
    Nov 03, 2021 7:09:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-vendor-calcite-1_28_0/0.1/1c51cee3b49ce084d1e0140b0344ff9151a61c3/beam-vendor-calcite-1_28_0-0.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-calcite-1_28_0-0.1-rC8q7_PRvA8jzRG9mkOwFoyPqh5LDK33P7R0J3HxuNY.jar
    Nov 03, 2021 7:09:36 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 246 files cached, 4 files newly uploaded in 1 seconds
    Nov 03, 2021 7:09:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 03, 2021 7:09:36 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash 1fb3d0d4bbe9abd9019153e842179939b4768783aa4309e6eb8a0b1365d173cf> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-H7PQ1Lvpq9kBkVPoQheZObR2h4OqQwnm64oLE2XRc88.pb
    Nov 03, 2021 7:09:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 03, 2021 7:09:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 03, 2021 7:09:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 03, 2021 7:09:39 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 03, 2021 7:09:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-03_12_09_39-4499515394267361049?project=apache-beam-testing
    Nov 03, 2021 7:09:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-03_12_09_39-4499515394267361049
    Nov 03, 2021 7:09:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-03_12_09_39-4499515394267361049
    Nov 03, 2021 7:09:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-03T19:09:42.559Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 03, 2021 7:09:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:50.175Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 03, 2021 7:09:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.145Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 03, 2021 7:09:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.185Z: Expanding GroupByKey operations into optimizable parts.
    Nov 03, 2021 7:09:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.219Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 03, 2021 7:09:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.294Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 03, 2021 7:09:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.315Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 03, 2021 7:09:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.367Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 03, 2021 7:09:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.714Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 7:09:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.786Z: Starting 5 workers in us-central1-c...
    Nov 03, 2021 7:09:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:56.839Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 03, 2021 7:10:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:10:31.894Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 03, 2021 7:11:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:10:58.272Z: Workers have started successfully.
    Nov 03, 2021 7:11:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:10:58.305Z: Workers have started successfully.
    Nov 03, 2021 7:11:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:11:29.353Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 7:11:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:11:29.464Z: Cleaning up.
    Nov 03, 2021 7:11:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:11:29.550Z: Stopping worker pool...
    Nov 03, 2021 7:14:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:14:00.886Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 03, 2021 7:14:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:14:00.982Z: Worker pool stopped.
    Nov 03, 2021 7:14:07 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-03_12_09_39-4499515394267361049 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cfc9a44a-962e-4507-816d-c014f3ac2f55 and timestamp: 2021-11-03T19:14:07.073000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      9.18

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 7:14:07 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 232 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.003 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 52.203 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 11m 18s
152 actionable tasks: 106 executed, 46 from cache

Publishing build scan...
https://gradle.com/s/x26oq4t425zya

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2624

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2624/display/redirect>

Changes:


------------------------------------------
[...truncated 338.08 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 03, 2021 12:44:49 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 03, 2021 12:44:50 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 03, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 03, 2021 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1591513427]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2135699149]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 03, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 03, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 03, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 03, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7606525479549177092.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6Pf_rRZo9FHC4KgsnWgrOk3n0kTO4e1txJWFCKxbCjM.jar
    Nov 03, 2021 12:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Nov 03, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 03, 2021 12:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash 010a725eb0ad2a1b0c23838c477f418f79093de9b022a685eef3031f25cc74bc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-AQpyXrCtKhsMI4OMR39Bj3kJPemwIqaF7vMDHyXMdLw.pb
    Nov 03, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 03, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 03, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 03, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 03, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-03_05_45_01-9053871630973985449?project=apache-beam-testing
    Nov 03, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-03_05_45_01-9053871630973985449
    Nov 03, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-03_05_45_01-9053871630973985449
    Nov 03, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-03T12:45:05.538Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 03, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:10.285Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 03, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:10.916Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 03, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:10.950Z: Expanding GroupByKey operations into optimizable parts.
    Nov 03, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:10.986Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:11.056Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:11.085Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:11.108Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:11.423Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:11.492Z: Starting 5 workers in us-central1-a...
    Nov 03, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:23.144Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 03, 2021 12:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:46:07.742Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 03, 2021 12:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:46:34.075Z: Workers have started successfully.
    Nov 03, 2021 12:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:46:34.094Z: Workers have started successfully.
    Nov 03, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:47:00.277Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:47:00.402Z: Cleaning up.
    Nov 03, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:47:00.483Z: Stopping worker pool...
    Nov 03, 2021 12:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:49:27.353Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 03, 2021 12:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:49:27.392Z: Worker pool stopped.
    Nov 03, 2021 12:49:34 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-03_05_45_01-9053871630973985449 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4a192225-7014-4b47-959a-ca92c671ad69 and timestamp: 2021-11-03T12:49:34.179000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.421

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 12:49:34 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
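
A side note on the warning just above: when the InfluxDB measurement/database settings are absent, metrics publishing is skipped but the test outcome is unaffected. As best I recall, the Beam test utilities assemble those settings along the lines of the sketch below; the host, database, and measurement values here are placeholders, so treat the whole snippet as an assumption about the wiring rather than this job's actual configuration:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    // Placeholder values; a real run would take these from pipeline options.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_test_metrics")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();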

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 48.631 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org
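
For anyone reproducing this locally: combined with the task path from the failure above, those Gradle hints amount to an invocation along these lines (run from the Beam source root):
> ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --stacktrace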

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/2btndnzz3rrze

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2623

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2623/display/redirect>

Changes:


------------------------------------------
[...truncated 337.52 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 03, 2021 6:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 03, 2021 6:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 03, 2021 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 03, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@876027468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
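
For context: the IllegalStateException above is Beam's generic diagnostic for a PCollection<Row> that never had a schema attached, and the message itself names the remedy (PCollection.setRowSchema). Below is a minimal, self-contained sketch of that remedy feeding the same query shape through Beam SQL; the schema, values, and class name are hypothetical stand-ins, not the actual BigQueryIOPushDownIT code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TypeDescriptor;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Hypothetical stand-in for the four projected HACKER_NEWS columns.
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

        PCollection<Row> rows =
            p.apply(Create.of("story", "job", "comment"))
                .apply(
                    MapElements.into(TypeDescriptor.of(Row.class))
                        .via((String t) -> Row.withSchema(schema)
                            .addValues("someone", t, "a title", 3L)
                            .build()))
                // Without this call, coder inference fails exactly as in the log above.
                .setRowSchema(schema);

        // Same predicate shape as the test query, run over the in-pipeline rows.
        rows.apply(
            SqlTransform.query(
                "SELECT author, `type`, title, score FROM PCOLLECTION "
                    + "WHERE (`type` = 'story' OR `type` = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }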

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1730798756]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 03, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 03, 2021 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 03, 2021 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 03, 2021 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 03, 2021 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5795617607859905281.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-8FzY6dr2515Uc6k2q5vgTxguum7HFgvL6MV6d1T43dg.jar
    Nov 03, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Nov 03, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 03, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash fd23cb9ba13ed2dbe2a62eef9d00f3c0d51dd74d209819e091e9f8c6ff93b418> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-_SPLm6E-0tvipi7vnQDzwNUd100gmBngken4xv-TtBg.pb
    Nov 03, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 03, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 03, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 03, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 03, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-02_23_45_04-16838286692204333905?project=apache-beam-testing
    Nov 03, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-02_23_45_04-16838286692204333905
    Nov 03, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-02_23_45_04-16838286692204333905
    Nov 03, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-03T06:45:07.854Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 03, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:14.350Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 03, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.179Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 03, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.219Z: Expanding GroupByKey operations into optimizable parts.
    Nov 03, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.256Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 03, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.320Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 03, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.338Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 03, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.359Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 03, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.664Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.726Z: Starting 5 workers in us-central1-c...
    Nov 03, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:42.218Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 03, 2021 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:58.404Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 03, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:46:24.052Z: Workers have started successfully.
    Nov 03, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:46:24.078Z: Workers have started successfully.
    Nov 03, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:46:53.600Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:46:53.721Z: Cleaning up.
    Nov 03, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:46:53.788Z: Stopping worker pool...
    Nov 03, 2021 6:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:49:20.340Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 03, 2021 6:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:49:20.382Z: Worker pool stopped.
    Nov 03, 2021 6:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-02_23_45_04-16838286692204333905 finished with status DONE.
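
The "BigQuery method is set to: DIRECT_READ" lines above come from the test's table definition. For reference, a Beam SQL BigQuery table that reads through the Storage API is declared with DDL roughly like the following; the column list is abbreviated and the location is a placeholder, so check the Beam table-provider docs before relying on the exact spelling:

    CREATE EXTERNAL TABLE HACKER_NEWS (
      title VARCHAR,
      score BIGINT,
      `by` VARCHAR,
      `type` VARCHAR
    )
    TYPE bigquery
    LOCATION 'my-project:my_dataset.hacker_news'
    TBLPROPERTIES '{"method": "DIRECT_READ"}'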

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f30d6cb1-8c74-4b13-b46b-59f8e38672d6 and timestamp: 2021-11-03T06:49:26.204000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.816

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 6:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 37.881 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 6s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/kfmfqqp6ee4bi

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2622

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2622/display/redirect?page=changes>

Changes:

[Andrew Pilloud] [BEAM-13118] Use branch for nightly "tag", tags are immutable

[dmitrii_kuzin] [BEAM-12730] Python. Custom delimiter add corner case

[noreply] Minor: add resource key to presentation materials link (#15867)


------------------------------------------
[...truncated 337.40 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 03, 2021 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 03, 2021 12:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 03, 2021 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 03, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1499886119]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@76893149]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 03, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 03, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7070725395123530739.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-GY5XTQWkkq5srSiHxHT3ygyhmw90uTm0UAMrc-03H3M.jar
    Nov 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Nov 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 669bbc407586561a0da573370b5ecb344d546fb7d9ea60c3057f4dc7326b1cac> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Zpu8QHWGVhoNpXM3C17LNE1Ub7fZ6mDDBX9NxzJrHKw.pb
    Nov 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-02_17_45_08-3250855308008233230?project=apache-beam-testing
    Nov 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-02_17_45_08-3250855308008233230
    Nov 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-02_17_45_08-3250855308008233230
    Nov 03, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-03T00:45:12.708Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:19.793Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:20.656Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:20.692Z: Expanding GroupByKey operations into optimizable parts.
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:20.728Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:20.803Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:20.829Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:20.864Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:21.193Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:21.279Z: Starting 5 workers in us-central1-c...
    Nov 03, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:42.625Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 03, 2021 12:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:46:07.501Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 03, 2021 12:46:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:46:33.979Z: Workers have started successfully.
    Nov 03, 2021 12:46:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:46:34.014Z: Workers have started successfully.
    Nov 03, 2021 12:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:47:06.147Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 12:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:47:06.298Z: Cleaning up.
    Nov 03, 2021 12:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:47:06.382Z: Stopping worker pool...
    Nov 03, 2021 12:49:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:49:43.173Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 03, 2021 12:49:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:49:43.255Z: Worker pool stopped.
    Nov 03, 2021 12:49:52 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-02_17_45_08-3250855308008233230 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ea8fcef7-13e7-47e2-843b-217f4aca3c90 and timestamp: 2021-11-03T00:49:52.543000000Z:
                     Metric:                    Value:
                   read_time                     8.287
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 12:49:53 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 5 mins 0.651 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 35s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/us5b75cof2lhu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2621

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2621/display/redirect?page=changes>

Changes:

[Alexey Romanenko] [BEAM-12070] ParquetIO: use splittable reading by default

[noreply] [BEAM-13001] fixes nil reference error in Extractor.ExtractFrom (#15865)

[noreply] Merge pull request #15842 from [BEAM-13125][Playground] Update

[noreply] Merge pull request #15838 from [BEAM-13127] [Playground] Implement TCP

[noreply] Merge pull request #15832 from [BEAM-13149] Changing readWithPartitions


------------------------------------------
[...truncated 340.84 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 02, 2021 6:45:43 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 02, 2021 6:45:44 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 02, 2021 6:45:44 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 02, 2021 6:45:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:45:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 6:45:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:45:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1591513427]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
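
This IllegalStateException is Beam's generic coder-resolution failure for Row outputs, and the message itself names the standard fix: attach the row schema so a RowCoder is installed before the PCollection is consumed. A self-contained sketch of that remediation, assuming an illustrative schema and a hypothetical DoFn (this is not the IT's code, which reads the HACKER_NEWS table):

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.schemas.Schema;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.Row;

public final class RowSchemaFixSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Illustrative schema only; not the HACKER_NEWS schema.
    Schema schema =
        Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

    PCollection<Row> rows =
        p.apply(Create.of("story", "job"))
            .apply(ParDo.of(
                new DoFn<String, Row>() {
                  @ProcessElement
                  public void process(@Element String type, OutputReceiver<Row> out) {
                    out.output(Row.withSchema(schema)
                        .addValues("someone", type, "a title", 3L)
                        .build());
                  }
                }));

    // Without this, coder inference fails exactly as in the trace above;
    // setRowSchema installs a RowCoder for the ParDo output.
    rows.setRowSchema(schema);

    p.run().waitUntilFinish();
  }
}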

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1156856349]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
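
The projection and filter being pushed down here correspond to read options that BigQueryIO exposes directly for the Storage Read API. A hedged sketch of an equivalent hand-written read, assuming an illustrative public table reference (withMethod, withSelectedFields and withRowRestriction are existing BigQueryIO.TypedRead options, though the SQL planner wires them up internally rather than like this):

import java.util.Arrays;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public final class PushDownEquivalentSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Only the used fields are read, and the predicate is evaluated
    // server-side by the Storage Read API: the effect the push-down
    // logged above achieves for the SQL query.
    p.apply(
        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full") // illustrative table
            .withMethod(Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

    p.run().waitUntilFinish();
  }
}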
    Nov 02, 2021 6:45:51 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 02, 2021 6:45:54 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 02, 2021 6:45:54 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 02, 2021 6:45:55 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7244616855344156846.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-QcTO2FE2rdJhsT1DuYDlHyB5u62L6B2JxfVGSxa2d-4.jar
    Nov 02, 2021 6:45:56 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Nov 02, 2021 6:45:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 02, 2021 6:45:56 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 8b1bc08211da6b968ddc53b1d99ca763ab9a87c01e01a1f9ce345a88dc94e2fe> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ixvAghHaa5aN3FOx2ZynY6uah8AeAaH5zjRaiNyU4v4.pb
    Nov 02, 2021 6:45:58 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 02, 2021 6:45:59 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 02, 2021 6:45:59 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 02, 2021 6:45:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 02, 2021 6:46:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-02_11_45_59-6555322559487291116?project=apache-beam-testing
    Nov 02, 2021 6:46:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-02_11_45_59-6555322559487291116
    Nov 02, 2021 6:46:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-02_11_45_59-6555322559487291116
    Nov 02, 2021 6:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-02T18:46:02.598Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:09.826Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:10.672Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:10.702Z: Expanding GroupByKey operations into optimizable parts.
    Nov 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:10.727Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:10.789Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:10.844Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:10.870Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 02, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:11.159Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:11.218Z: Starting 5 workers in us-central1-c...
    Nov 02, 2021 6:46:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:42.877Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 02, 2021 6:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:51.962Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 02, 2021 6:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:47:18.262Z: Workers have started successfully.
    Nov 02, 2021 6:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:47:18.287Z: Workers have started successfully.
    Nov 02, 2021 6:47:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:47:48.958Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 6:47:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:47:49.522Z: Cleaning up.
    Nov 02, 2021 6:47:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:47:49.610Z: Stopping worker pool...
    Nov 02, 2021 6:50:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:50:16.630Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 02, 2021 6:50:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:50:16.672Z: Worker pool stopped.
    Nov 02, 2021 6:50:23 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-02_11_45_59-6555322559487291116 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9b0f029b-19bf-446c-8bb5-8c8fef6b2c68 and timestamp: 2021-11-02T18:50:23.504000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.528

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 6:50:23 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
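
This warning is why no metrics reach InfluxDB for these runs: the publisher skips the upload when the database and measurement properties are absent, so only the BigQuery metrics path is exercised. A hedged sketch of how such settings are assembled with Beam's test utilities, assuming placeholder host, database and measurement values (real runs supply them through pipeline options):

import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

public final class InfluxSettingsSketch {
  public static void main(String[] args) {
    // All values below are placeholders, not this job's configuration.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_test_metrics")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
    // With both database and measurement present, InfluxDBPublisher would
    // attempt the write instead of logging the warning above.
    System.out.println(settings);
  }
}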

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 45.609 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 55s
152 actionable tasks: 97 executed, 55 from cache

Publishing build scan...
https://gradle.com/s/vgie4525rvl62

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2620

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2620/display/redirect>

Changes:


------------------------------------------
[...truncated 337.98 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 02, 2021 12:44:48 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 02, 2021 12:44:49 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 02, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 02, 2021 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1499886119]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
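
For orientation in these traces: BeamSqlRelUtils.toPCollection is the internal step that lowers the planned relational tree into a PCollection, and the same conversion is reached through the public SqlTransform API. A self-contained sketch of that public path over an in-memory table, assuming an illustrative two-field schema (the IT itself drives the planner against BigQuery's HACKER_NEWS table):

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.extensions.sql.SqlTransform;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.schemas.Schema;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.Row;

public final class SqlTransformPathSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    Schema schema =
        Schema.builder().addStringField("type").addInt64Field("score").build();

    PCollection<Row> input =
        p.apply(
            Create.of(
                    Row.withSchema(schema).addValues("story", 3L).build(),
                    Row.withSchema(schema).addValues("job", 1L).build())
                .withRowSchema(schema));

    // SqlTransform plans the query and converts the resulting rel tree to a
    // PCollection: the toPCollection frame seen in the stack traces.
    input.apply(
        SqlTransform.query("SELECT type, score FROM PCOLLECTION WHERE score > 2"));

    p.run().waitUntilFinish();
  }
}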

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@76893149]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 02, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 02, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 02, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 02, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 02, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3842048782514570956.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-B6RU5pjy3QsQhPg_OyHW7a6dnrefS8u8c1vQ68b2Xus.jar
    Nov 02, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Nov 02, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 02, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 9f916284b45e87b784dff29ff402647589963c71d357a4a07325d9c9b40ff061> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-n5FihLReh7eE3_Kf9AJkdYmWPHHTV6SgcyXZybQP8GE.pb
    Nov 02, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 02, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 02, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 02, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 02, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-02_05_45_01-7694486272515793678?project=apache-beam-testing
    Nov 02, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-02_05_45_01-7694486272515793678
    Nov 02, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-02_05_45_01-7694486272515793678
    Nov 02, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-02T12:45:04.664Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:11.089Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.020Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.068Z: Expanding GroupByKey operations into optimizable parts.
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.103Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.207Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.248Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.275Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.647Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.725Z: Starting 5 workers in us-central1-c...
    Nov 02, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:33.974Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 02, 2021 12:45:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:56.823Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 02, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:46:23.076Z: Workers have started successfully.
    Nov 02, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:46:23.115Z: Workers have started successfully.
    Nov 02, 2021 12:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:46:54.255Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:46:54.406Z: Cleaning up.
    Nov 02, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:46:54.493Z: Stopping worker pool...
    Nov 02, 2021 12:49:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:49:09.904Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 02, 2021 12:49:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:49:09.959Z: Worker pool stopped.
    Nov 02, 2021 12:49:15 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-02_05_45_01-7694486272515793678 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6c50898b-f669-4c5e-811a-f417ee70a90f and timestamp: 2021-11-02T12:49:15.186000000Z:
                     Metric:                    Value:
                   read_time                     9.862
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 12:49:15 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 30.156 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 58s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/b76zit6bf3iji

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2619

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2619/display/redirect?page=changes>

Changes:

[aydar.zaynutdinov] [BEAM-13145][Playground]


------------------------------------------
[...truncated 340.34 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 02, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 02, 2021 6:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 02, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 02, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1591513427]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2135699149]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 02, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
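
    For reference, the push-down the SQL layer performs here corresponds, at the
    BigQueryIO level, to reading only the used fields and handing the filter to
    the BigQuery Storage Read API to evaluate server-side. A hedged sketch under
    that reading (the table name is illustrative, not the table this IT reads):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Sketch only: roughly what the pushed-down plan asks of BigQuery --
        // project just the four used fields and apply the filter at the source.
        p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full")  // illustrative table
                .withMethod(Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
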
    Nov 02, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 02, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 02, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 02, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3292997521250367930.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-vac6rsy2zot7Ye1q6iGWKk9dM9M7hihMRK6U3xtnfmo.jar
    Nov 02, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Nov 02, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 02, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash aa9868d6a69d306538efc2305917dfe40928aa9a0b30a22cd9b65907ae365b40> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-qpho1qadMGU478IwWRff5AkoqpoLMKIs2bZZB642W0A.pb
    Nov 02, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 02, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 02, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 02, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 02, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-01_23_45_10-6157933829983358251?project=apache-beam-testing
    Nov 02, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-01_23_45_10-6157933829983358251
    Nov 02, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-01_23_45_10-6157933829983358251
    Nov 02, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-02T06:45:13.930Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 02, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:21.648Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 02, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.243Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 02, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.272Z: Expanding GroupByKey operations into optimizable parts.
    Nov 02, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.293Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 02, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.370Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 02, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.400Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 02, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.429Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 02, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.774Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.854Z: Starting 5 workers in us-central1-c...
    Nov 02, 2021 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:52.273Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 02, 2021 6:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:46:02.349Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 02, 2021 6:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:46:29.915Z: Workers have started successfully.
    Nov 02, 2021 6:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:46:29.948Z: Workers have started successfully.
    Nov 02, 2021 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:46:59.219Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:46:59.361Z: Cleaning up.
    Nov 02, 2021 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:46:59.438Z: Stopping worker pool...
    Nov 02, 2021 6:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:49:21.175Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 02, 2021 6:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:49:21.219Z: Worker pool stopped.
    Nov 02, 2021 6:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-01_23_45_10-6157933829983358251 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4009beec-72e5-4360-9687-9249b74ff4ed and timestamp: 2021-11-02T06:49:26.885000000Z:
                     Metric:                    Value:
                   read_time                     8.163
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 6:49:27 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 36.408 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/xcnt5kaymwwxc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2618

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2618/display/redirect?page=changes>

Changes:

[avilovpavel6] Added cleanup os env before and after tests

[noreply] [BEAM-13001] fixes Golint issues (#15859)

[noreply] Merge pull request #15840 from [Playground][BEAM-13146][Bugfix]


------------------------------------------
[...truncated 346.24 KB...]
Gradle Test Executor 15 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 15'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 15'
Successfully started process 'Gradle Test Executor 15'
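
The -DbeamTestPipelineOptions=[...] JSON array in the command above is how these integration tests receive their pipeline options: TestPipeline deserializes that system property. A minimal sketch of reading it (not the IT's actual code):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class TestOptionsSketch {
      public static void main(String[] args) {
        // TestPipeline.testingPipelineOptions() parses the JSON array in the
        // beamTestPipelineOptions system property into PipelineOptions.
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        System.out.println("Configured runner: " + options.getRunner().getSimpleName());
      }
    }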

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 02, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 02, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 12:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1591513427]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1156856349]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 02, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 02, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 02, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 02, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 02, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests-OSJn6rDYScnOqVtjqjWjn4keTL2V-mBSDXbhyjSfST0.jar
    Nov 02, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test803957735624882856.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-y2G5I8M9kBqBSuKvgS5sb2AJcWFXLFuhWpFb-qxDtMc.jar
    Nov 02, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tDWGgdT04k0kcm0eO5adgiF1BfJHG8lQ1W4REGrrw0g.jar
    Nov 02, 2021 12:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 3 files newly uploaded in 1 seconds
    Nov 02, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 02, 2021 12:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash 40e7a61a8b1022ef41ae767277a5d9292013a982afc95351071c750e083fc63e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-QOemGosQIu9BrnZyd6XZKSATqYKvyVNRBxx1Dgg_xj4.pb
    Nov 02, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 02, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 02, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 02, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 02, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-01_17_45_16-13039375285617828759?project=apache-beam-testing
    Nov 02, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-01_17_45_16-13039375285617828759
    Nov 02, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-01_17_45_16-13039375285617828759
    Nov 02, 2021 12:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-02T00:45:19.836Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:26.386Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.091Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.121Z: Expanding GroupByKey operations into optimizable parts.
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.155Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.239Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.269Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.300Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.634Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.700Z: Starting 5 workers in us-central1-c...
    Nov 02, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:36.899Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 02, 2021 12:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:46:12.033Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 02, 2021 12:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:46:37.384Z: Workers have started successfully.
    Nov 02, 2021 12:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:46:37.402Z: Workers have started successfully.
    Nov 02, 2021 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:47:05.153Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:47:05.294Z: Cleaning up.
    Nov 02, 2021 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:47:05.367Z: Stopping worker pool...
    Nov 02, 2021 12:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:49:26.343Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 02, 2021 12:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:49:26.379Z: Worker pool stopped.
    Nov 02, 2021 12:49:31 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-01_17_45_16-13039375285617828759 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8438149d-de28-40aa-b99d-1ec3e4b5cc57 and timestamp: 2021-11-02T00:49:31.834000000Z:
                     Metric:                    Value:
                   read_time                     8.296
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 12:49:32 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 15 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.008 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.004 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 33.786 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 102 executed, 50 from cache

Publishing build scan...
https://gradle.com/s/volsy5iuok6fq

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2617

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2617/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11936] Fix errorprone warnings (#15821)


------------------------------------------
[...truncated 347.10 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) started.
Gradle Test Executor 4 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 01, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 01, 2021 6:45:15 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 01, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 01, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 6:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1499886119]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@76893149]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 01, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 01, 2021 6:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3405281722735097461.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-qT-Q3fyfdvPuFsH3R1NAkPgT8PJT9fKtrNItXP9V8Ko.jar
    Nov 01, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Nov 01, 2021 6:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 01, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 8eda6dd26c3ca6753e6a567dbb4bad3357fab6a2faf9bb99c46fe8fde8832b9d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-jtpt0mw8pnU-alZ9u0utM1f6tqL6-buZxG_o_eiDK50.pb
    Nov 01, 2021 6:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 01, 2021 6:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 01, 2021 6:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 01, 2021 6:45:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 01, 2021 6:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-01_11_45_28-16407113518321098892?project=apache-beam-testing
    Nov 01, 2021 6:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-01_11_45_28-16407113518321098892
    Nov 01, 2021 6:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-01_11_45_28-16407113518321098892
    Nov 01, 2021 6:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-01T18:45:32.196Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:38.968Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:39.638Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:39.695Z: Expanding GroupByKey operations into optimizable parts.
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:39.721Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:39.776Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:39.804Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:39.857Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:40.169Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:40.246Z: Starting 5 workers in us-central1-c...
    Nov 01, 2021 6:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:46:04.945Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 01, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:46:24.967Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 01, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:46:51.351Z: Workers have started successfully.
    Nov 01, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:46:51.375Z: Workers have started successfully.
    Nov 01, 2021 6:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:47:22.817Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 6:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:47:23.107Z: Cleaning up.
    Nov 01, 2021 6:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:47:23.190Z: Stopping worker pool...
    Nov 01, 2021 6:49:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:49:35.917Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 01, 2021 6:49:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:49:35.964Z: Worker pool stopped.
    Nov 01, 2021 6:49:44 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-01_11_45_28-16407113518321098892 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 75d5baac-d5ef-4a7e-a661-d0319742e5e6 and timestamp: 2021-11-01T18:49:44.186000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.351

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 6:49:44 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 33.779 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 24s
152 actionable tasks: 102 executed, 50 from cache

Publishing build scan...
https://gradle.com/s/bwd2cenuqqalq

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2616

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2616/display/redirect>

Changes:


------------------------------------------
[...truncated 338.24 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
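
For readers following along: the -DbeamTestPipelineOptions JSON in the command above is how these integration tests receive their pipeline configuration. A minimal sketch of how a test can pick those options up, assuming only the public TestPipeline API (the printed runner is whatever the property specifies, DataflowRunner here):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class OptionsSketch {
      public static void main(String[] args) {
        // testingPipelineOptions() parses the JSON array supplied through the
        // beamTestPipelineOptions system property shown in the command line above.
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        System.out.println(options.getRunner());
      }
    }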

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 01, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 01, 2021 12:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 01, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 01, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1850632974]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
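
The exception above is a coder inference failure: a PCollection of Beam Row has no default coder unless a schema is attached. A minimal sketch of the two remedies the message names, an explicit coder via RowCoder and setRowSchema on a downstream output (illustrative only, not the IT's actual fix; the schema simply mirrors the four projected columns):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TypeDescriptor;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();
        Row row = Row.withSchema(schema)
            .addValues("someone", "story", "a title", 3L)
            .build();
        p.apply(Create.of(row).withCoder(RowCoder.of(schema)))  // explicit coder
            .apply(MapElements.into(TypeDescriptor.of(Row.class)).via((Row r) -> r))
            .setRowSchema(schema);  // schema attached, so a Row coder can be inferred
        p.run().waitUntilFinish();
      }
    }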

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@785615728]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
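
The push-down logged here means the Storage API performs the projection and filtering server-side. For reference, a hand-written read with the same effect would look roughly like the sketch below; BigQueryIO's withMethod, withSelectedFields, and withRowRestriction are real SDK methods, while the table reference is a placeholder, not the IT's actual HACKER_NEWS source:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        p.apply(
            BigQueryIO.readTableRows()
                .from("some-project:some_dataset.hacker_news")  // placeholder table
                .withMethod(Method.DIRECT_READ)
                // usedFields=[[by, type, title, score]] from the plan above:
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // the filter pushed down in the log line above:
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
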
    Nov 01, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 01, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 01, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Nov 01, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test996361637803596029.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-7vBJw3xz9nE-MFn_YCL-kyxiSYBTh7lKy7d33tCiM5Y.jar
    Nov 01, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Nov 01, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 01, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 9a518e1a33aead2d7c93fdfcf6dff801298cd300b81a5cae909b83fd0fd95e42> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-mlGOGjOurS18k_389t_4ASmM0wC4GlyukJuD_Q_ZXkI.pb
    Nov 01, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 01, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 01, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 01, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 01, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-01_05_45_09-5328820062955668730?project=apache-beam-testing
    Nov 01, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-01_05_45_09-5328820062955668730
    Nov 01, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-01_05_45_09-5328820062955668730
    Nov 01, 2021 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-01T12:45:12.132Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
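
This warning is expected for these runs: with autoscalingAlgorithm=NONE the pool is pinned at numWorkers, so maxNumWorkers has no effect. A minimal sketch of that option combination using the Dataflow runner's options interfaces:

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineWorkerPoolOptions.AutoscalingAlgorithmType;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class WorkerPoolSketch {
      public static void main(String[] args) {
        DataflowPipelineOptions options =
            PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);
        options.setNumWorkers(5);      // fixed pool size, as in this job
        options.setMaxNumWorkers(5);   // ignored once autoscaling is NONE, hence the warning
        options.setAutoscalingAlgorithm(AutoscalingAlgorithmType.NONE);
      }
    }
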
    Nov 01, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:17.657Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 01, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:18.396Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 01, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:18.443Z: Expanding GroupByKey operations into optimizable parts.
    Nov 01, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:18.467Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 01, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:18.564Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 01, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:18.593Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 01, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:18.627Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 01, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:18.957Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:19.035Z: Starting 5 workers in us-central1-c...
    Nov 01, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:27.234Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 01, 2021 12:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:46:03.047Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 01, 2021 12:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:46:28.105Z: Workers have started successfully.
    Nov 01, 2021 12:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:46:28.145Z: Workers have started successfully.
    Nov 01, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:46:55.948Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:46:56.077Z: Cleaning up.
    Nov 01, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:46:56.156Z: Stopping worker pool...
    Nov 01, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:49:22.246Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 01, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:49:22.286Z: Worker pool stopped.
    Nov 01, 2021 12:49:28 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-01_05_45_09-5328820062955668730 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ff6d4c7e-c40b-4b75-acf2-99c1f9027612 and timestamp: 2021-11-01T12:49:28.214000000Z:
                     Metric:                    Value:
                   read_time                     9.086
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 12:49:28 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
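
This warning just means no InfluxDB target was configured, so the metrics above were printed but not exported. A hedged sketch of supplying one; the builder-style InfluxDBSettings API is assumed from the class that accompanies InfluxDBPublisher in Beam's test utilities, and all three values are placeholders:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSketch {
      public static void main(String[] args) {
        // Assumed builder API; verify against org.apache.beam.sdk.testutils.publishing.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")
                .withDatabase("beam_test_metrics")
                .withMeasurement("sql_bqio_read_java_batch")
                .get();
      }
    }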

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 39.12 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/o74ovdvw6xuwm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2615

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2615/display/redirect>

Changes:


------------------------------------------
[...truncated 339.24 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 01, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 01, 2021 6:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 01, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 01, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1812577242]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 01, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@882830612]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 01, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 01, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 01, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Nov 01, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1278691954197753490.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-8-s4wE0QdAf7vuorunZ2HiYBUrNG7vGkUkzRWhCH0mA.jar
    Nov 01, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Nov 01, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 01, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash b9540334cccc8ece51b8d8fa27c9e4c2c3a6111acf82b22d98f57de770fec7d1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-uVQDNMzMjs5RuNj6J8nkwsOmERrPgrItmPV953D-x9E.pb
    Nov 01, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 01, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 01, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 01, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 01, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-31_23_45_09-11723723560549017728?project=apache-beam-testing
    Nov 01, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-31_23_45_09-11723723560549017728
    Nov 01, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-31_23_45_09-11723723560549017728
    Nov 01, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-01T06:45:14.760Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 01, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:21.202Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.143Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.183Z: Expanding GroupByKey operations into optimizable parts.
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.210Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.285Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.311Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.344Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.661Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.729Z: Starting 5 workers in us-central1-c...
    Nov 01, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:28.298Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 01, 2021 6:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:54.361Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Nov 01, 2021 6:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:54.387Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Nov 01, 2021 6:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:46:04.617Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 01, 2021 6:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:46:29.753Z: Workers have started successfully.
    Nov 01, 2021 6:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:46:29.784Z: Workers have started successfully.
    Nov 01, 2021 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:47:00.657Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:47:00.785Z: Cleaning up.
    Nov 01, 2021 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:47:00.865Z: Stopping worker pool...
    Nov 01, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:49:22.985Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 01, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:49:23.073Z: Worker pool stopped.
    Nov 01, 2021 6:49:30 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-31_23_45_09-11723723560549017728 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c94dbc5e-23b1-48da-b027-b875998264a4 and timestamp: 2021-11-01T06:49:30.272000000Z:
                     Metric:                    Value:
                   read_time                     10.03
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 6:49:30 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 39.545 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/56gjcngjnnhd6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2614

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2614/display/redirect>

Changes:


------------------------------------------
[...truncated 339.42 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 01, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 01, 2021 12:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 01, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 01, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1785069675]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@899766210]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
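
For readers unfamiliar with what this push-down buys: at the IO level it corresponds to a BigQuery Storage API read that fetches only the used fields and evaluates the predicate server-side. A rough hand-written equivalent is sketched below; the table name is illustrative, not the table this test actually reads:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        pipeline.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // illustrative table
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Projection push-down: read only the fields the query uses.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Filter push-down: the predicate is evaluated by BigQuery.
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }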
    Nov 01, 2021 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 01, 2021 12:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 01, 2021 12:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Nov 01, 2021 12:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4463813952263075272.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-XTfUz0OREnp_VWMcN2KIpFSyqxsy5hvA7cBHm99jZYY.jar
    Nov 01, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Nov 01, 2021 12:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 01, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104896 bytes, hash 5e12502ef02032396fe0b155b52fc5aedeb461e1439dd2a310450f405ca094a7> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-XhJQLvAgMjlv4LFVtS_Frt60YeFDndKjEEUPQFyglKc.pb
    Nov 01, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 01, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 01, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 01, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 01, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-31_17_45_03-8308257496170417740?project=apache-beam-testing
    Nov 01, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-31_17_45_03-8308257496170417740
    Nov 01, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-31_17_45_03-8308257496170417740
    Nov 01, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-01T00:45:09.581Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 01, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:14.936Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:15.536Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:15.576Z: Expanding GroupByKey operations into optimizable parts.
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:15.602Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:15.666Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:15.689Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:15.837Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:16.237Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:16.306Z: Starting 5 workers in us-central1-a...
    Nov 01, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:44.175Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 01, 2021 12:45:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:56.838Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 01, 2021 12:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:46:24.217Z: Workers have started successfully.
    Nov 01, 2021 12:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:46:24.254Z: Workers have started successfully.
    Nov 01, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:46:51.765Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:46:51.894Z: Cleaning up.
    Nov 01, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:46:51.982Z: Stopping worker pool...
    Nov 01, 2021 12:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:49:19.885Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 01, 2021 12:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:49:19.917Z: Worker pool stopped.
    Nov 01, 2021 12:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-31_17_45_03-8308257496170417740 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED
    Load test results for test (ID): 311aa12b-047d-427d-91be-61fb0480a630 and timestamp: 2021-11-01T00:49:26.477000000Z:
                     Metric:                    Value:
                   read_time                     7.593
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 12:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
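
This warning is benign for the build result: it only means the InfluxDB database/measurement settings were absent from the pipeline options, so the collected metrics are not exported. If publishing were desired, the settings would be supplied alongside the other entries in -DbeamTestPipelineOptions, roughly as below; the option names follow Beam's InfluxDB test utilities and the values are placeholders, so treat the exact spelling as an assumption:

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"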

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 39.234 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/6nry5axhv7mmm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2613

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2613/display/redirect>

Changes:


------------------------------------------
[...truncated 339.20 KB...]
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
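
As background, the -DbeamTestPipelineOptions JSON array above is handed to the test JVM as a system property, and the test harness parses it into PipelineOptions. A stripped-down sketch of that parsing step with a few of the same flags (plain PipelineOptionsFactory usage, not the harness's exact wiring):

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class OptionsParsingSketch {
      public static void main(String[] args) {
        String[] flags = {
            "--project=apache-beam-testing",
            "--runner=DataflowRunner",
            "--region=us-central1",
        };
        // Each --name=value flag maps onto a getter/setter pair on the
        // requested options interface; unrecognized flags are rejected.
        DataflowPipelineOptions options =
            PipelineOptionsFactory.fromArgs(flags).as(DataflowPipelineOptions.class);
        System.out.println(options.getProject()); // apache-beam-testing
      }
    }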

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 31, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 31, 2021 6:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 31, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 31, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1850632974]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@402527372]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 31, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 31, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 31, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 31, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5176460154054228143.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-rjl6cpA24uQPJb6if8ugvHBLcTI9r8AgNOzQB8HWbz8.jar
    Oct 31, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.visible-assertions/visible-assertions/2.1.2/20d31a578030ec8e941888537267d3123c2ad1c1/visible-assertions-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/visible-assertions-2.1.2-RQSulosjfNzcto_1sHqmOr5JkvkHp3w9YSCqm5BBQBw.jar
    Oct 31, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna-platform/4.0.0/deb6bf66918989b50209b8c9aaf3b2561af7f011/jna-platform-4.0.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-platform-4.0.0-B21i7Yfna9yzdQ_-gKpJXuIRK9chJmOVTWVH9IJEkuk.jar
    Oct 31, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.duct-tape/duct-tape/1.0.8/92edc22a9ab2f3e17c9bf700aaee377d50e8b530/duct-tape-1.0.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/duct-tape-1.0.8-Mc7xLd7JedH4bXz3CMQaF9pSPQXGhf1mQunQsq3bckA.jar
    Oct 31, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna/5.5.0/e0845217c4907822403912ad6828d8e0b256208/jna-5.5.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-5.5.0-swj66_5O1AnehBDgpjLRZLISawNfbqz_lo05CMr7TZ4.jar
    Oct 31, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 0 seconds
    Oct 31, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 31, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash c8cf0147efef9c70eb531945060f07d654c6d6c861ba2ed2f1ba862c610248af> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-yM8BR-_vnHDrUxlFBg8H1lTG1shhui7S8bqGLGECSK8.pb
    Oct 31, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 31, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 31, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 31, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 31, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-31_11_45_04-9800440872836558494?project=apache-beam-testing
    Oct 31, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-31_11_45_04-9800440872836558494
    Oct 31, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-31_11_45_04-9800440872836558494
    Oct 31, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-31T18:45:07.260Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 31, 2021 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:14.004Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:14.696Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:14.739Z: Expanding GroupByKey operations into optimizable parts.
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:14.788Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:14.864Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:14.899Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:14.951Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:15.303Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:15.392Z: Starting 5 workers in us-central1-c...
    Oct 31, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:45.216Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 31, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:50.848Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 31, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:50.881Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 31, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:46:01.258Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 31, 2021 6:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:46:25.505Z: Workers have started successfully.
    Oct 31, 2021 6:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:46:25.536Z: Workers have started successfully.
    Oct 31, 2021 6:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:46:56.747Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 6:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:46:56.918Z: Cleaning up.
    Oct 31, 2021 6:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:46:57.004Z: Stopping worker pool...
    Oct 31, 2021 6:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:49:22.070Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 31, 2021 6:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:49:22.110Z: Worker pool stopped.
    Oct 31, 2021 6:49:28 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-31_11_45_04-9800440872836558494 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): da9f84e8-ebd6-4528-8df7-82f24dc20c8d and timestamp: 2021-10-31T18:49:28.349000000Z:
                     Metric:                    Value:
                   read_time                    10.991
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 6:49:28 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 41.251 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ivcdgxrmqzo6m

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2612

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2612/display/redirect>

Changes:


------------------------------------------
[...truncated 338.54 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 31, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 31, 2021 12:44:53 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 31, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 31, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1850632974]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@785615728]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 31, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 31, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 31, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 31, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4673906120842993167.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-RvOhR2-u4RYJ9c_iDXufjodi9WDKeIxY1Wim3pRCkOw.jar
    Oct 31, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 31, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 31, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 500a4585e3d19f01629912fd3c44e3d2cdba2f0fccda22019f18bb1dfd833d01> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-UApFhePRnwFimRL9PETj0s26Lw_M2iIBnxi7Hf2DPQE.pb
    Oct 31, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 31, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 31, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 31, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 31, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-31_05_45_05-5378782084699586586?project=apache-beam-testing
    Oct 31, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-31_05_45_05-5378782084699586586
    Oct 31, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-31_05_45_05-5378782084699586586
    Oct 31, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-31T12:45:09.051Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 31, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:17.921Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:18.679Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:18.714Z: Expanding GroupByKey operations into optimizable parts.
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:18.750Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:18.816Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:18.849Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:18.882Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:19.277Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:19.376Z: Starting 5 workers in us-central1-c...
    Oct 31, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:41.091Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
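Unused descriptors under custom.googleapis.com/* can be deleted through the Cloud Monitoring v3 API that the warning links to. A minimal sketch using the Java client, assuming the google-cloud-monitoring library is on the classpath; the descriptor name below is a hypothetical example, not one of this project's actual metrics:

    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.MetricDescriptorName;

    public class DeleteOldDescriptorSketch {
      public static void main(String[] args) throws Exception {
        // Deletes one user-defined metric descriptor, freeing one of the
        // custom.googleapis.com/* descriptor slots the warning refers to.
        try (MetricServiceClient client = MetricServiceClient.create()) {
          client.deleteMetricDescriptor(
              MetricDescriptorName.of(
                  "apache-beam-testing", "custom.googleapis.com/old_unused_counter"));
        }
      }
    }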
    Oct 31, 2021 12:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:46:05.128Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 31, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:46:32.380Z: Workers have started successfully.
    Oct 31, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:46:32.460Z: Workers have started successfully.
    Oct 31, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:47:00.594Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:47:00.753Z: Cleaning up.
    Oct 31, 2021 12:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:47:00.828Z: Stopping worker pool...
    Oct 31, 2021 12:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:49:25.409Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 31, 2021 12:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:49:25.459Z: Worker pool stopped.
    Oct 31, 2021 12:49:31 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-31_05_45_05-5378782084699586586 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b81ad903-b309-4e5a-8a01-2dba8edd5019 and timestamp: 2021-10-31T12:49:31.966000000Z:
                     Metric:                    Value:
                   read_time                     7.043
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 12:49:32 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
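Because of this missing configuration, the metrics above were printed to the console only. Publishing them requires the InfluxDB measurement and database settings to be passed alongside the other test options; the entries below are a sketch, and the flag names and values are assumptions about the Beam test utilities rather than options taken from this job's configuration:

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"

appended to the -DbeamTestPipelineOptions array that launches the test JVM.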

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 44.234 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/a74a3p4ahu22w

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2611

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2611/display/redirect>

Changes:


------------------------------------------
[...truncated 338.35 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 31, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
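As the warning says, this is a straight rename of the harness option the job passes (visible as "--workerHarnessContainerImage=" in the test command line); assuming nothing else about the job changes, the replacement entry in the options array would be:

    "--sdkContainerImage="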
    Oct 31, 2021 6:45:28 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 31, 2021 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 31, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:45:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 6:45:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:45:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:45:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1812577242]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
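
The generic remedy the message names looks like the toy pipeline below. This is a minimal sketch of PCollection.setRowSchema only, not a fix for BigQueryIOPushDownIT itself (the PCollection that fails here is built inside BeamSqlRelUtils, not in test code); the class name, schema field types, and values are invented for illustration:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.schemas.Schema.FieldType;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TypeDescriptor;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Field names mirror the query's projected columns; the types are
        // assumptions, not the real HACKER_NEWS schema.
        Schema schema = Schema.builder()
            .addNullableField("author", FieldType.STRING)
            .addNullableField("type", FieldType.STRING)
            .addNullableField("title", FieldType.STRING)
            .addNullableField("score", FieldType.INT64)
            .build();

        PCollection<Row> rows = pipeline
            .apply(Create.of("story", "job"))
            .apply(MapElements
                .into(TypeDescriptor.of(Row.class))
                .via((String t) -> Row.withSchema(schema)
                    .addValues("someone", t, "a title", 3L)
                    .build()));

        // The call the error message asks for: without it (or an explicit
        // setCoder), finishSpecifying() raises the IllegalStateException
        // above, because no coder can be inferred for a raw Row.
        rows.setRowSchema(schema);

        pipeline.run().waitUntilFinish();
      }
    }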

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 31, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1156856349]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
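At the BigQueryIO level, the pushed-down read above corresponds roughly to a Storage API read restricted to the used fields plus a row restriction. A sketch under that assumption; the table reference is a placeholder, while the field list and filter are the ones from the plan:

    import java.util.Arrays;

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // DIRECT_READ uses the BigQuery Storage API, which is what allows
        // the projection (selected fields) and filter (row restriction)
        // to be evaluated server-side instead of in the pipeline.
        PCollection<TableRow> rows = pipeline.apply(
            BigQueryIO.readTableRows()
                .from("some-project:some_dataset.HACKER_NEWS")
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }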
    Oct 31, 2021 6:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 31, 2021 6:45:37 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 31, 2021 6:45:37 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 31, 2021 6:45:37 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test362942635619535858.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-YW66o3GoK0jV1NRr0ewvjrLL635jZgtq5bobD1Cf34k.jar
    Oct 31, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 31, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 31, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash be6dec7821ddfc6faef0eb17c46931b0bad1900813e53c4ded742acb8af8de9c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-vm3seCHd_G-u8OsXxGkxsLrRkAgT5TxN7XQqy4r43pw.pb
    Oct 31, 2021 6:45:40 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 31, 2021 6:45:40 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 31, 2021 6:45:40 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 31, 2021 6:45:40 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 31, 2021 6:45:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-30_23_45_40-18233134485701076680?project=apache-beam-testing
    Oct 31, 2021 6:45:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-30_23_45_40-18233134485701076680
    Oct 31, 2021 6:45:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-30_23_45_40-18233134485701076680
    Oct 31, 2021 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-31T06:45:43.861Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 31, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:50.568Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:51.566Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:51.787Z: Expanding GroupByKey operations into optimizable parts.
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:51.860Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:51.969Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:52.011Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:52.034Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:52.356Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:52.443Z: Starting 5 workers in us-central1-a...
    Oct 31, 2021 6:46:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:46:17.265Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 31, 2021 6:46:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:46:41.557Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 31, 2021 6:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:47:08.562Z: Workers have started successfully.
    Oct 31, 2021 6:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:47:08.590Z: Workers have started successfully.
    Oct 31, 2021 6:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:47:36.406Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 6:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:47:36.550Z: Cleaning up.
    Oct 31, 2021 6:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:47:36.629Z: Stopping worker pool...
    Oct 31, 2021 6:49:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:49:59.361Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 31, 2021 6:49:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:49:59.401Z: Worker pool stopped.
    Oct 31, 2021 6:50:04 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-30_23_45_40-18233134485701076680 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): dba154f1-7f83-4552-b9e3-ca76d885074d and timestamp: 2021-10-31T06:50:04.828000000Z:
                     Metric:                    Value:
                   read_time                     9.319
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 6:50:05 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 41.228 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/dijcrzgt7t542

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2610

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2610/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #15753 from [BEAM-13080] Add option in Reshuffle to


------------------------------------------
[...truncated 338.00 KB...]
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 31, 2021 12:44:49 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 31, 2021 12:44:50 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 31, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 31, 2021 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@576744789]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2102085350]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 31, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 31, 2021 12:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 31, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 31, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 31, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7047443652272399555.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ZX3BGsszZ0VCEKSwZ20YewebUiQ2wnQsf7X1DaMRB6s.jar
    Oct 31, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.alibaba/fastjson/1.2.69/6cb063f1d527ff65bdbb9ea74888a5ffc3f92197/fastjson-1.2.69.jar to gs://temp-storage-for-perf-tests/loadtests/staging/fastjson-1.2.69-KniRdEoDeOAJmpuB3b7U0uGn6GMvb4b4_8-jg-UeScg.jar
    Oct 31, 2021 12:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 31, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 31, 2021 12:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104893 bytes, hash ecda6e7969e9eac8e48d10e668d86b9af9a19983c08518158a55ccf16f8addc5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-7NpueWnp6sjkjRDmaNhrmvmhmYPAhRgVilXM8W-K3cU.pb
    Oct 31, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 31, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 31, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 31, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 31, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-30_17_45_02-11466525833592608079?project=apache-beam-testing
    Oct 31, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-30_17_45_02-11466525833592608079
    Oct 31, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-30_17_45_02-11466525833592608079
    Oct 31, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-31T00:45:05.959Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 31, 2021 12:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:12.551Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.375Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.405Z: Expanding GroupByKey operations into optimizable parts.
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.430Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.500Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.525Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.558Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.896Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.960Z: Starting 5 workers in us-central1-c...
    Oct 31, 2021 12:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:43.174Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 31, 2021 12:46:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:59.601Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 31, 2021 12:46:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:46:25.408Z: Workers have started successfully.
    Oct 31, 2021 12:46:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:46:25.434Z: Workers have started successfully.
    Oct 31, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:46:54.474Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:46:54.620Z: Cleaning up.
    Oct 31, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:46:54.692Z: Stopping worker pool...
    Oct 31, 2021 12:49:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:49:16.425Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 31, 2021 12:49:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:49:16.460Z: Worker pool stopped.
    Oct 31, 2021 12:49:23 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-30_17_45_02-11466525833592608079 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1e6807df-df56-4817-a6fc-2b7a1b68e7cd and timestamp: 2021-10-31T00:49:23.187000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.629

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 12:49:23 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 37.261 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ibd2lmet5kuvk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2609

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2609/display/redirect>

Changes:


------------------------------------------
[...truncated 337.59 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 30, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 30, 2021 6:44:53 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 30, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 30, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
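
A query of this shape is normally applied with SqlTransform over a schema-aware PCollection; the planner then produces a LogicalProject/LogicalFilter tree like the one above. The following is a minimal illustrative sketch, not this test's own code: the in-memory table, schema, and values are assumptions standing in for the real HACKER_NEWS table.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    public class SqlQuerySketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        // Stand-in schema with only the fields the query touches.
        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        PCollection<Row> hackerNews = pipeline.apply(
            Create.of(Row.withSchema(schema).addValues("a", "story", "t", 3L).build())
                .withRowSchema(schema));

        // Registering the collection under the tag name makes it queryable as a table,
        // producing the same plan shape as the log above.
        PCollection<Row> result =
            PCollectionTuple.of(new TupleTag<>("HACKER_NEWS"), hackerNews)
                .apply(SqlTransform.query(
                    "SELECT `by` AS author, `type`, title, score FROM HACKER_NEWS "
                        + "WHERE (`type` = 'story' OR `type` = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }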


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1850632974]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
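
This is the stock missing-RowCoder failure: a ParDo that emits Row gives the runner nothing to infer a coder from, and the message itself names the fix, PCollection.setRowSchema. A minimal hedged sketch of that fix follows; the schema and data are illustrative, not taken from the test.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowCoderFixSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        Schema schema = Schema.builder()
            .addStringField("author")
            .addInt64Field("score")
            .build();

        // A ParDo that emits Row has no inferable coder: the same situation as
        // ParMultiDo(RowMonitor).output in the stack trace above.
        PCollection<Row> rows = pipeline
            .apply(Create.of("someone,3"))
            .apply(ParDo.of(new DoFn<String, Row>() {
              @ProcessElement
              public void processElement(@Element String line, OutputReceiver<Row> out) {
                String[] parts = line.split(",");
                out.output(Row.withSchema(schema)
                    .addValues(parts[0], Long.parseLong(parts[1])).build());
              }
            }));

        // The fix the error message suggests: attach the schema so a RowCoder is used.
        rows.setRowSchema(schema);

        pipeline.run().waitUntilFinish();
      }
    }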

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@785615728]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 30, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
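
These two lines are the effect of push-down: the projection (usedFields) and the filter travel to the BigQuery Storage Read API instead of running as Beam transforms. A hand-written equivalent, sketched under the assumption of the public Hacker News table rather than this job's copy:

    import java.util.Arrays;

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        // The same projection and row restriction the SQL layer pushed down above.
        PCollection<TableRow> rows = pipeline.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full")  // assumed table
                .withMethod(TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }
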
    Oct 30, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 30, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 30, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 30, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1100293419112175705.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-JQ7X8EXJUUk8jpxivWfsvAJlU0-NApT423ykPVtZxqE.jar
    Oct 30, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Oct 30, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 30, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash be466095046eb815bf7c58ecc4ef48508672e585a9760786b13b09d30ae0ead9> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-vkZglQRuuBW_fFjsxO9IUIZy5YWpdgeGsTsJ0wrg6tk.pb
    Oct 30, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 30, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 30, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 30, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 30, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-30_11_45_07-15044799619051388739?project=apache-beam-testing
    Oct 30, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-30_11_45_07-15044799619051388739
    Oct 30, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-30_11_45_07-15044799619051388739
    Oct 30, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-30T18:45:10.669Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 30, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:16.441Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 30, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.274Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 30, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.306Z: Expanding GroupByKey operations into optimizable parts.
    Oct 30, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.340Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 30, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.400Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 30, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.431Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 30, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.463Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 30, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.758Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.828Z: Starting 5 workers in us-central1-a...
    Oct 30, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:25.712Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 30, 2021 6:46:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:46:11.138Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 30, 2021 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:46:36.079Z: Workers have started successfully.
    Oct 30, 2021 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:46:36.102Z: Workers have started successfully.
    Oct 30, 2021 6:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:47:04.334Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 6:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:47:04.468Z: Cleaning up.
    Oct 30, 2021 6:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:47:04.536Z: Stopping worker pool...
    Oct 30, 2021 6:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:49:31.648Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 30, 2021 6:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:49:31.682Z: Worker pool stopped.
    Oct 30, 2021 6:49:37 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-30_11_45_07-15044799619051388739 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 57eae8b2-f3f5-46e7-a3f7-669d4e5b2806 and timestamp: 2021-10-30T18:49:37.802000000Z:
                     Metric:                    Value:
                   read_time                      9.13
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 6:49:38 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
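
The warning above means the run's metrics (read_time, fields_read) were computed but not exported to InfluxDB. In Beam's test-utils convention the publisher is configured through extra entries in the -DbeamTestPipelineOptions array; the option names below follow that convention, but the values are placeholders, not taken from this job:

    "--influxDatabase=beam_test_metrics",
    "--influxMeasurement=sql_bqio_read_java_batch",
    "--influxHost=http://localhost:8086"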

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 49.153 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/fqn5rxbnyaawc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2608

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2608/display/redirect>

Changes:


------------------------------------------
[...truncated 338.23 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 30, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 30, 2021 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 30, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 30, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1812577242]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1156856349]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 30, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 30, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 30, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 30, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3829206006658264483.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6JCsJTYYoMb46kZK-Rw_SMv7J5S5MuMc13Zhnf1aYeQ.jar
    Oct 30, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 30, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 30, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash b2881793868dbc923e9e4b311f91ae3bd6e53aba30bf7b5cfe0f440659ae0675> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-sogXk4aNvJI-nksxH5GuO9blOrowv3tc_g9EBlmuBnU.pb
    Oct 30, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 30, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 30, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 30, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 30, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-30_05_45_05-10547139370384166789?project=apache-beam-testing
    Oct 30, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-30_05_45_05-10547139370384166789
    Oct 30, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-30_05_45_05-10547139370384166789
    Oct 30, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-30T12:45:08.910Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 30, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:17.709Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:18.427Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:18.467Z: Expanding GroupByKey operations into optimizable parts.
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:18.485Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:18.560Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:18.594Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:18.626Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:19.183Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:19.243Z: Starting 5 workers in us-central1-a...
    Oct 30, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:46.302Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 30, 2021 12:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:46:04.449Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 30, 2021 12:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:46:31.360Z: Workers have started successfully.
    Oct 30, 2021 12:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:46:31.396Z: Workers have started successfully.
    Oct 30, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:46:59.918Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:47:00.048Z: Cleaning up.
    Oct 30, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:47:00.110Z: Stopping worker pool...
    Oct 30, 2021 12:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:49:21.101Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 30, 2021 12:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:49:21.140Z: Worker pool stopped.
    Oct 30, 2021 12:49:37 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-30_05_45_05-10547139370384166789 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 14a22a52-17db-458e-9e7c-275dfcd9ff30 and timestamp: 2021-10-30T12:49:37.629000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.517

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 12:49:37 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 49.319 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/2ckiu7r2m54sa

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2607

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2607/display/redirect>

Changes:


------------------------------------------
[...truncated 338.82 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 30, 2021 6:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 30, 2021 6:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 30, 2021 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 30, 2021 6:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1850632974]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@402527372]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
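    The push-down in the plan above moves both the column projection and the row filter into the BigQuery Storage Read API, so only the selected fields of matching rows cross the wire. A hand-written read with the same effect might look as follows; this is a sketch against the public BigQueryIO API, and the table name is a placeholder rather than the table this test reads:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // placeholder table
                .withMethod(Method.DIRECT_READ)
                // Equivalent of usedFields=[[by, type, title, score]]
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Equivalent of the pushed-down filter logged above
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
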
    Oct 30, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 30, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 30, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 30, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6467173426407546027.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ShAbNeJhTpKibmm9fROrOqlHV3JpbdGgdoJWtDcG7Gw.jar
    Oct 30, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 30, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 30, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 38853d9b66e72dd46077ff922f2098d5e15f20bb8f99a9c0274acb779695fa5d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-OIU9m2bnLdRgd_-SLyCY1eFfILuPmanAJ0rLd5aV-l0.pb
    Oct 30, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 30, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 30, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 30, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 30, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-29_23_45_02-11753447552064274059?project=apache-beam-testing
    Oct 30, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-29_23_45_02-11753447552064274059
    Oct 30, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-29_23_45_02-11753447552064274059
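    The job handle returned by Pipeline.run() offers the same cancellation from code. A minimal sketch, with the pipeline's transforms elided:

    import java.io.IOException;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.PipelineResult;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class CancelSketch {
      public static void main(String[] args) throws IOException {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // ... apply transforms here ...
        PipelineResult result = p.run();
        // Asks the runner (Dataflow, for this job) to stop the job; the same
        // effect as the gcloud command printed above.
        result.cancel();
      }
    }
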
    Oct 30, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-30T06:45:07.559Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 30, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:14.193Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 30, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:14.967Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 30, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:15.089Z: Expanding GroupByKey operations into optimizable parts.
    Oct 30, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:15.187Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 30, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:15.343Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 30, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:15.393Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 30, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:15.470Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 30, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:16.103Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:16.182Z: Starting 5 workers in us-central1-c...
    Oct 30, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:26.747Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 30, 2021 6:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:46:01.538Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 30, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:46:27.396Z: Workers have started successfully.
    Oct 30, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:46:27.422Z: Workers have started successfully.
    Oct 30, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:46:55.073Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:46:55.246Z: Cleaning up.
    Oct 30, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:46:55.314Z: Stopping worker pool...
    Oct 30, 2021 6:49:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:49:22.849Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 30, 2021 6:49:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:49:22.896Z: Worker pool stopped.
    Oct 30, 2021 6:49:28 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-29_23_45_02-11753447552064274059 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b416a8d0-972f-4454-9902-35be5998682c and timestamp: 2021-10-30T06:49:28.795000000Z:
                     Metric:                    Value:
                   read_time                     7.355
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 6:49:29 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
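
The warning above means the test ran without the InfluxDB database and measurement options, so the collected metrics were only printed, not published. A minimal sketch of configuring the publisher's settings, assuming Beam's test-utils builder; the host and database values are placeholders, not the CI configuration:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSketch {
      public static void main(String[] args) {
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")       // placeholder host
                .withDatabase("beam_test_metrics")       // placeholder database
                .withMeasurement("sql_bqio_read_java_batch")
                .get();
        // With both database and measurement present, the publisher writes the
        // metrics instead of logging the warning seen above.
        System.out.println(settings);
      }
    }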

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 42.314 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/zeqn3rei5rucg

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2606

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2606/display/redirect?page=changes>

Changes:

[Luke Cwik] [BEAM-13015] Migrate bundle processing in the SDK harness to using

[Brian Hulette] [BEAM-13099] Copy vendored calcite build for 1.28.0

[Brian Hulette] [BEAM-13099] Modifications for vendor/calcite-1_28_0 build

[noreply] [BEAM-13001] DoFn metrics for Go SDK (#15657)


------------------------------------------
[...truncated 354.00 KB...]
Gradle Test Executor 5 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 5'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 5'
Successfully started process 'Gradle Test Executor 5'
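
The -DbeamTestPipelineOptions system property in the command above carries a JSON array of pipeline arguments into the test JVM; TestPipeline parses it when the integration test builds its options. A minimal sketch of reading them the same way:

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class TestOptionsSketch {
      public static void main(String[] args) {
        // Parses the JSON array from the -DbeamTestPipelineOptions property,
        // e.g. -DbeamTestPipelineOptions=["--runner=DataflowRunner"]
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        System.out.println(options.getRunner());
      }
    }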

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 30, 2021 12:56:11 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 30, 2021 12:56:12 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 30, 2021 12:56:13 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 30, 2021 12:56:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:56:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:56:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 12:56:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:56:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:56:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1850632974]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@402527372]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 12:56:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 30, 2021 12:56:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 30, 2021 12:56:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 30, 2021 12:56:21 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 30, 2021 12:56:21 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 30, 2021 12:56:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-0NLdms4PlDCbuhdFfquuZqvkjQupN7vd8rr0ZBuhhrw.jar
    Oct 30, 2021 12:56:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6662209333345085913.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-2lclFvWKW20kc4X9PqTgETVIc3k02D3GcIqg8FYRrqQ.jar
    Oct 30, 2021 12:56:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-vendor-calcite-1_26_0/0.1/5fae4e97a2d8739462bd1572e48d01228766b6ef/beam-vendor-calcite-1_26_0-0.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-calcite-1_26_0-0.1-pYZ7esxRWyhKmBqBdfrpnxvg8woyykTvGbaCvLtyRyA.jar
    Oct 30, 2021 12:56:22 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 3 files newly uploaded in 1 seconds
    Oct 30, 2021 12:56:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 30, 2021 12:56:22 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 1e76ec5cd0d64801c5c342a99566acffdacb17359bee1509da44e688fef19892> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-HnbsXNDWSAHFw0KplWas_9rLFzWb7hUJ2kTmiP7xmJI.pb
    Oct 30, 2021 12:56:24 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 30, 2021 12:56:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 30, 2021 12:56:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 30, 2021 12:56:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 30, 2021 12:56:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-29_17_56_25-15640718372973521731?project=apache-beam-testing
    Oct 30, 2021 12:56:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-29_17_56_25-15640718372973521731
    Oct 30, 2021 12:56:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-29_17_56_25-15640718372973521731
    Oct 30, 2021 12:56:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-30T00:56:28.760Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 30, 2021 12:56:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:35.857Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:36.638Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:36.676Z: Expanding GroupByKey operations into optimizable parts.
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:36.707Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:36.780Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:36.801Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:36.825Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:37.162Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:37.228Z: Starting 5 workers in us-central1-a...
    Oct 30, 2021 12:57:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:57:10.439Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 30, 2021 12:57:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:57:23.755Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 30, 2021 12:57:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:57:48.346Z: Workers have started successfully.
    Oct 30, 2021 12:57:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:57:48.374Z: Workers have started successfully.
    Oct 30, 2021 12:58:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:58:17.559Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 12:58:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:58:17.678Z: Cleaning up.
    Oct 30, 2021 12:58:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:58:17.775Z: Stopping worker pool...
    Oct 30, 2021 1:00:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T01:00:38.095Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 30, 2021 1:00:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T01:00:38.133Z: Worker pool stopped.
    Oct 30, 2021 1:00:43 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-29_17_56_25-15640718372973521731 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0ec4815a-68d4-43c8-bb52-919b2c5698ab and timestamp: 2021-10-30T01:00:43.691000000Z:
                     Metric:                    Value:
                   read_time                     7.487
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 1:00:44 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 35.502 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 20s
152 actionable tasks: 106 executed, 46 from cache

Publishing build scan...
https://gradle.com/s/seguozwjd5mom

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2605

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2605/display/redirect?page=changes>

Changes:

[noreply] [BEAM-8958] Use AWS credentials provider with BasicKinesisProvider

[noreply] [BEAM-13099] Use BlockBuilder.add(..) rather than

[noreply] Minor: Remove broken python compatibility checks (#15828)

[noreply] [BEAM-12047] Updates CHANGES.md to mention the URN convention (#15845)


------------------------------------------
[...truncated 341.07 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ffef37f1f512e2ff72d7df90c2e756c9
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 29, 2021 6:46:02 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 29, 2021 6:46:03 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 29, 2021 6:46:03 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 29, 2021 6:46:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:46:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:46:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 6:46:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:46:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:46:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 6:46:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1812577242]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 29, 2021 6:46:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 6:46:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:46:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 6:46:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 6:46:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:46:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 6:46:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@882830612]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 6:46:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:46:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:46:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 6:46:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:46:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:46:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 6:46:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 29, 2021 6:46:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 29, 2021 6:46:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 29, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 29, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 29, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-6U1fnteJpM87FvukrS9vCPZ0ldEVRYGcZemwHs5aJZc.jar
    Oct 29, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6666071151916232178.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-LrcfuP3Pt6SBQrKQaHskQX9GXsOz1O9UjxS4Uh2EroI.jar
    Oct 29, 2021 6:46:14 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 29, 2021 6:46:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 29, 2021 6:46:14 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 3ae85454b2661e9f6de8ec7341e761635c1a0b1e3fdb9abb158611f9899e5dbc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-OuhUVLJmHp9t6OxzQedhY1waCx4_25q7FYYR-YmeXbw.pb
    Oct 29, 2021 6:46:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 29, 2021 6:46:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 29, 2021 6:46:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 29, 2021 6:46:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 29, 2021 6:46:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-29_11_46_16-1680074798003155344?project=apache-beam-testing
    Oct 29, 2021 6:46:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-29_11_46_16-1680074798003155344
    Oct 29, 2021 6:46:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-29_11_46_16-1680074798003155344
    Oct 29, 2021 6:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-29T18:46:20.058Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 29, 2021 6:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:28.712Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:29.468Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:29.535Z: Expanding GroupByKey operations into optimizable parts.
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:29.587Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:29.658Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:29.693Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:29.724Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:30.161Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:30.255Z: Starting 5 workers in us-central1-c...
    Oct 29, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:57.268Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 29, 2021 6:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:47:11.245Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 29, 2021 6:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:47:11.278Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 29, 2021 6:47:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:47:21.598Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 29, 2021 6:47:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:47:44.444Z: Workers have started successfully.
    Oct 29, 2021 6:47:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:47:44.478Z: Workers have started successfully.
    Oct 29, 2021 6:48:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:48:18.383Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 6:48:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:48:18.771Z: Cleaning up.
    Oct 29, 2021 6:48:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:48:18.878Z: Stopping worker pool...
    Oct 29, 2021 6:50:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:50:43.818Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 29, 2021 6:50:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:50:43.864Z: Worker pool stopped.
    Oct 29, 2021 6:50:50 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-29_11_46_16-1680074798003155344 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c4a03450-614a-4f3f-b4f3-cc4d35f33a35 and timestamp: 2021-10-29T18:50:50.225000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    11.573

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 6:50:50 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
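
This warning is why the metrics printed just above never reach InfluxDB: the publisher was given neither a measurement nor a database, so it drops the results after logging them. As a sketch only -- the option names and values below are assumptions based on common Beam perf-test configurations, not confirmed by this log -- they would be supplied as extra entries in the -DbeamTestPipelineOptions array visible in the test-executor command line:

    -DbeamTestPipelineOptions=["--project=apache-beam-testing",
        "--influxDatabase=beam_test_metrics",
        "--influxMeasurement=sql_bqio_read_java_batch",
        "--influxHost=http://localhost:8086"]
    # option names/values above are assumed, not taken from this build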

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 52.19 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 32s
152 actionable tasks: 97 executed, 55 from cache

Publishing build scan...
https://gradle.com/s/pfgi7knuulu7o

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2604

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2604/display/redirect>

Changes:


------------------------------------------
[...truncated 346.29 KB...]
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 7b3e4c96a207d7f212569f92f13d6cf6
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 29, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 29, 2021 12:45:22 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 29, 2021 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 29, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1812577242]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
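
Both failures in this run bottom out in the same root cause spelled out by the exception: the PCollection of Beam Rows emitted by the RowMonitor ParDo carries no schema, so no Coder can be inferred for it. A minimal, self-contained sketch of the remedy the message itself suggests -- attaching a schema to the Row-typed output with setRowSchema -- using a toy schema and pipeline rather than the test's actual ones (all names below are illustrative assumptions):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());

        // Toy stand-in for the HACKER_NEWS projection (author, type, title, score).
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addNullableField("title", Schema.FieldType.STRING)
            .addInt32Field("score")
            .build();

        PCollection<Row> rows = p.apply(
            Create.of(Row.withSchema(schema)
                    .addValues("alice", "story", "hello", 3)
                    .build())
                .withRowSchema(schema));

        // A pass-through ParDo, analogous to ParDo(RowMonitor): its Row output
        // has no inferable Coder until a schema is attached.
        PCollection<Row> monitored = rows
            .apply(ParDo.of(new DoFn<Row, Row>() {
              @ProcessElement
              public void process(@Element Row row, OutputReceiver<Row> out) {
                out.output(row);
              }
            }))
            // The fix the IllegalStateException points at:
            .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }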

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 29, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@882830612]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
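
The DIRECT_READ method and the pushed-down filter logged here are both driven by how the table is registered with the Beam SQL BigQuery table provider. As an illustrative sketch -- the LOCATION is a placeholder, and the column list is assumed from the public hacker_news schema, which is consistent with the field indexes in the plans above ($0 title, $4 by, $5 score, $8 type) -- the table DDL would look roughly like:

    CREATE EXTERNAL TABLE HACKER_NEWS (
        title VARCHAR, url VARCHAR, text VARCHAR, dead BOOLEAN,
        `by` VARCHAR, score INTEGER, `time` INTEGER, `timestamp` TIMESTAMP,
        type VARCHAR, id INTEGER, parent INTEGER, descendants INTEGER,
        ranking INTEGER, deleted BOOLEAN)
    TYPE 'bigquery'
    LOCATION 'some-project:some_dataset.hacker_news'  -- placeholder, not this job's table
    TBLPROPERTIES '{"method": "DIRECT_READ"}'

With the method left at DEFAULT, the planner keeps the projection and filter in a BeamCalcRel over a plain BeamIOSourceRel, as the readUsingDefaultMethod plan earlier in this log shows; only DIRECT_READ yields the BeamPushDownIOSourceRel with usedFields and BigQueryFilter seen here.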
    Oct 29, 2021 12:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 29, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 29, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 29, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8305812740099050526.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6ycfLpUzFen-qI8LH97jsEq_U2djbFah2Y6k8ol5B94.jar
    Oct 29, 2021 12:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 29, 2021 12:45:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 29, 2021 12:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 0c4471dd28bd9fe94805f77d2821c712c3f01b180973a7fbc171bb26736c9dad> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-DERx3Si9n-lIBfd9KCHHEsPwGxgJc6f7wXG7JnNsna0.pb
    Oct 29, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 29, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 29, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 29, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 29, 2021 12:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-29_05_45_37-10434285753432183194?project=apache-beam-testing
    Oct 29, 2021 12:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-29_05_45_37-10434285753432183194
    Oct 29, 2021 12:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-29_05_45_37-10434285753432183194
    Oct 29, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-29T12:45:43.067Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 29, 2021 12:45:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:50.024Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.053Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.241Z: Expanding GroupByKey operations into optimizable parts.
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.306Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.377Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.411Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.474Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.761Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.827Z: Starting 5 workers in us-central1-a...
    Oct 29, 2021 12:45:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:56.458Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 29, 2021 12:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:46:28.640Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 29, 2021 12:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:46:28.669Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 29, 2021 12:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:46:39.184Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 29, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:47:03.518Z: Workers have started successfully.
    Oct 29, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:47:03.556Z: Workers have started successfully.
    Oct 29, 2021 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:47:36.184Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:47:36.329Z: Cleaning up.
    Oct 29, 2021 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:47:36.414Z: Stopping worker pool...
    Oct 29, 2021 12:50:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:50:10.021Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 29, 2021 12:50:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:50:10.259Z: Worker pool stopped.
    Oct 29, 2021 12:50:20 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-29_05_45_37-10434285753432183194 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4875e6fd-1916-4c38-b466-1ce69e34acbf and timestamp: 2021-10-29T12:50:20.168000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.647

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 12:50:20 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 2.569 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 59s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/wryxoerncpd5k

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2603

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2603/display/redirect?page=changes>

Changes:

[artur.khanin] Implemented GetCompileOutput and covered with unit tests

[artur.khanin] Replaced logging library regarding PR comment

[aydar.zaynutdinov] [BEAM-13120][Playground]

[Kyle Weaver] [BEAM-13143] Fix python doc generator error.

[Pablo Estrada] Handle runner-provided shards for TextIO

[noreply] Merge pull request #15721 from [BEAM-13023][Playground] Implement Redis

[noreply] Merge pull request #15802 from [BEAM-12970][Playground] Implement gRPC

[noreply] Merge pull request #15800 from [BEAM-12970][Playground] Implement gRPC


------------------------------------------
[...truncated 342.92 KB...]
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 29, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 29, 2021 6:45:02 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 29, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 29, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1812577242]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 29, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@882830612]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 29, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3840560413878150944.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-s_btb8uGFWCFZHueULyLYDbqf-rvKa2-1TgwWnICN5Y.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/kafka/1.15.1/ccd374ca7e5d8b06fb31ecffeb03abc42feada84/kafka-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/kafka-1.15.1-Z4qZ3jdWXDvdB7xD9VivkdVQiRZWNWUhPSIANVCSZDA.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/testcontainers/1.15.1/91e6dfab8f141f77c6a0dd147a94bd186993a22c/testcontainers-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/testcontainers-1.15.1-_fMrLpRXSowMnAISRc73bMk5UqrXNnsv6vkPXOZRXu0.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-api/3.2.7/81408fc988c229ea11354fee9902c47842343f04/docker-java-api-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-api-3.2.7-AwtXDacySf3gUHoixVjjNqW0wUI0YcGJjMO-N7kxBho.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.process/2.1.2/986b38302fa10018d5baccee7f655c31ee9afe5b/de.flapdoodle.embed.process-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.process-2.1.2-OasY7D5KRAimcZcWcjFwgi8Qb4B-iff1FfrVeNSih6A.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport-zerodep/3.2.7/d74cb05fabc57e9ec441dc39d0bc58ad649fad3d/docker-java-transport-zerodep-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-zerodep-3.2.7-O2a9rUb-899yDZMu4E8j9ItFOkNoggqbLP2xaLmOwJY.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.mongo/2.2.0/781d14f4e3d9eeb0b4c3e00a4ec165a04b3dc5c4/de.flapdoodle.embed.mongo-2.2.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.mongo-2.2.0-vNy3lJC0jW9u4Cy1AHsqSbjRUqOTX9ycpEmHkht7vvk.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-csv/1.8/37ca9a9aa2d4be2599e55506a6d3170dd7a3df4/commons-csv-1.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-csv-1.8-qL1WZS7UZo2dWjOZSuUvWbnjnI6w68tmhOaK7udXmmE.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/janino/3.0.11/e699e368095379ba0402ea1780a87fcaea16e68f/janino-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/janino-3.0.11-kje3HSMpGA5ZIQ6aqhAO4xNFTvCuWIYIx1yxkxlZG-E.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/commons-compiler/3.0.11/f2a6ec7fbc929c9fc87ff8bb486c0574951c5b09/commons-compiler-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-compiler-3.0.11-DxpPXyZccBoxkzJErnBF_O8YtPpZUEF-Je5wvlDd2s8.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport/3.2.7/315903a129f530422747efc163dd255f0fa2555e/docker-java-transport-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-3.2.7-ynNXlSXhDSyiLCXkA4c9E9_HGeKB7-wbpEzmswFrI-A.jar
    Oct 29, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 239 files cached, 11 files newly uploaded in 1 seconds
    Oct 29, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 29, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 63d9ecd05bfe9ee7607f41942fcb076ad0ebc1f570b6b24c7e733de0a250b6e7> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Y9ns0Fv-nudgf0GUL8sHatDrwfVwtrJMfnM94KJQtuc.pb
    Oct 29, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 29, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 29, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 29, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 29, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-28_23_45_16-11740413233974374282?project=apache-beam-testing
    Oct 29, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-28_23_45_16-11740413233974374282
    Oct 29, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-28_23_45_16-11740413233974374282
    Oct 29, 2021 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-29T06:45:20.779Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 29, 2021 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:37.252Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 29, 2021 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:38.673Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 29, 2021 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:39.162Z: Expanding GroupByKey operations into optimizable parts.
    Oct 29, 2021 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:39.293Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 29, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:39.692Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 29, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:39.875Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 29, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:40.052Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 29, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:40.792Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:40.874Z: Starting 5 workers in us-central1-c...
    Oct 29, 2021 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:51.785Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 29, 2021 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:46:26.574Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 29, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:46:52.352Z: Workers have started successfully.
    Oct 29, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:46:52.383Z: Workers have started successfully.
    Oct 29, 2021 6:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:47:22.380Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 6:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:47:22.541Z: Cleaning up.
    Oct 29, 2021 6:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:47:22.618Z: Stopping worker pool...
    Oct 29, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:49:46.756Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 29, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:49:46.807Z: Worker pool stopped.
    Oct 29, 2021 6:49:59 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-28_23_45_16-11740413233974374282 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2ffa4a8e-be28-4f6b-b389-1955df1976d9 and timestamp: 2021-10-29T06:49:59.160000000Z:
                     Metric:                    Value:
                   read_time                     9.594
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 6:49:59 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 2.578 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 39s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/4lrmg76nctdg6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2602

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2602/display/redirect>

Changes:


------------------------------------------
[...truncated 342.77 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is cdb611d8bfd52e82910a87e269abc932
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 29, 2021 12:46:47 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 29, 2021 12:46:49 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 29, 2021 12:46:50 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 29, 2021 12:46:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:46:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:46:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 12:46:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1363274082]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
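
    The fix this exception names is mechanical: a PCollection of Beam Rows has no
    inferable coder, so the pipeline must attach a Schema explicitly. The regression
    itself sits inside BeamSqlRelUtils.toPCollection (per the stack trace) and its
    source is not shown in this log; the following is only a minimal, self-contained
    sketch of the PCollection.setRowSchema workaround the message suggests, with
    illustrative field names mirroring the four projected columns, not the actual
    test code:

        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.transforms.Create;
        import org.apache.beam.sdk.transforms.DoFn;
        import org.apache.beam.sdk.transforms.ParDo;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        public class RowSchemaFixSketch {
          public static void main(String[] args) {
            Pipeline p = Pipeline.create();

            // Illustrative schema mirroring the projected columns in the query above.
            final Schema schema =
                Schema.builder()
                    .addNullableField("author", Schema.FieldType.STRING)
                    .addNullableField("type", Schema.FieldType.STRING)
                    .addNullableField("title", Schema.FieldType.STRING)
                    .addNullableField("score", Schema.FieldType.INT32)
                    .build();

            PCollection<Row> rows =
                p.apply(Create.of("story", "job"))
                    .apply(
                        ParDo.of(
                            new DoFn<String, Row>() {
                              @ProcessElement
                              public void process(@Element String type, OutputReceiver<Row> out) {
                                out.output(
                                    Row.withSchema(schema)
                                        .addValues("some-author", type, "some-title", 3)
                                        .build());
                              }
                            }))
                    // A ParDo that emits Row has no inferable coder; attaching the
                    // schema gives the output PCollection a RowCoder, which is exactly
                    // what the IllegalStateException above asks for.
                    .setRowSchema(schema);

            p.run().waitUntilFinish();
          }
        }

    Run on its own (e.g. with the direct runner), this should construct cleanly;
    dropping the setRowSchema call should reproduce the same "Unable to return a
    default Coder" failure when the pipeline is finalized.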

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 29, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 12:47:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@758955551]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 12:47:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:47:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 29, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
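
    For context, whether this filter reaches BigQuery at all is controlled by the
    table's read method: the DIRECT_READ setting logged above comes from the Beam SQL
    table definition, and filter/projection push-down only applies to that path. A
    minimal sketch of such a definition, with a placeholder location and an
    abbreviated column list (the actual DDL used by the test is not shown in this
    log):

        CREATE EXTERNAL TABLE HACKER_NEWS (
            `by` VARCHAR,
            `type` VARCHAR,
            `title` VARCHAR,
            `score` INT
            -- remaining columns elided
        )
        TYPE bigquery
        LOCATION 'apache-beam-testing:hacker_news.full'  -- placeholder project:dataset.table
        TBLPROPERTIES '{"method": "DIRECT_READ"}'
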
    Oct 29, 2021 12:47:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 29, 2021 12:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 29, 2021 12:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 29, 2021 12:47:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6331128248829274349.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-mtI5owoS0u5ESDk8500ymZ6nvb70IsnApMyjSVZrz8c.jar
    Oct 29, 2021 12:47:12 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 3 seconds
    Oct 29, 2021 12:47:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 29, 2021 12:47:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104893 bytes, hash 323429202f7c910a9d7698eb73776a2fd060bac8894c53f837a6f3907e1f4920> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-MjQpIC98kQqddpjrc3dqL9BgusiJTFP4N6bzkH4fSSA.pb
    Oct 29, 2021 12:47:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 29, 2021 12:47:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 29, 2021 12:47:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 29, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 29, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-28_17_47_17-16482579381554673905?project=apache-beam-testing
    Oct 29, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-28_17_47_17-16482579381554673905
    Oct 29, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-28_17_47_17-16482579381554673905
    Oct 29, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-29T00:47:20.302Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 29, 2021 12:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:32.279Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:33.012Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:33.044Z: Expanding GroupByKey operations into optimizable parts.
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:33.078Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:33.163Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:33.196Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:33.229Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:33.634Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:34.080Z: Starting 5 workers in us-central1-c...
    Oct 29, 2021 12:47:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:35.347Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 29, 2021 12:48:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:48:13.283Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 29, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:48:38.878Z: Workers have started successfully.
    Oct 29, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:48:38.913Z: Workers have started successfully.
    Oct 29, 2021 12:49:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:49:08.159Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 12:49:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:49:09.361Z: Cleaning up.
    Oct 29, 2021 12:49:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:49:09.594Z: Stopping worker pool...
    Oct 29, 2021 12:51:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:51:33.932Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 29, 2021 12:51:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:51:33.968Z: Worker pool stopped.
    Oct 29, 2021 12:51:42 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-28_17_47_17-16482579381554673905 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7f8ff1b9-0911-41b7-af48-5b0f4c406a67 and timestamp: 2021-10-29T00:51:42.150000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.203

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 12:51:42 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
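
    This warning is why no metrics row reaches InfluxDB even though the push-down
    test itself passed: the publisher needs a measurement and database name from the
    pipeline options. A sketch of the two missing entries as they would appear in the
    -DbeamTestPipelineOptions array shown at the top of this log (option names
    assumed from comparable Beam perf-test jobs; values are placeholders):

        "--influxMeasurement=sql_bqio_read_java_batch",
        "--influxDatabase=beam_test_metrics"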

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 5 mins 2.872 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 14s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/d2xgv37gtklc4

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2601

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2601/display/redirect?page=changes>

Changes:

[daria.malkova] fix executor tests

[mmack] [BEAM-13138] Update code to retrieve container endpoint

[noreply] Merge pull request #15709 from [BEAM-12967] [Playground] Create Example

[noreply] Merge pull request #15294 from [BEAM-11986] Spanner write metric

[noreply] [BEAM-13130] Remove persistent references to stateKeyReaders (#15815)

[Brian Hulette] [BEAM-13099] Replace call to RelNode.metadata with BeamRelMetadataQuery

[Brian Hulette] [BEAM-13099] Update all rels to use BeamRelMetadataQuery


------------------------------------------
[...truncated 341.88 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is cdb611d8bfd52e82910a87e269abc932
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 28, 2021 6:46:14 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 28, 2021 6:46:15 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 28, 2021 6:46:15 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 28, 2021 6:46:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:46:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:46:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 6:46:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:46:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:46:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 6:46:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1850632974]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 28, 2021 6:46:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 6:46:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:46:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 6:46:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 6:46:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:46:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@785615728]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 28, 2021 6:46:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 28, 2021 6:46:24 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 28, 2021 6:46:24 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 28, 2021 6:46:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-pyg7GrYof-707XaIUnVsamEoMf000KZSZ062WgHP3oI.jar
    Oct 28, 2021 6:46:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2875733142419718918.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-DDEGeEq_m0K79oEeP8hb4dezS_wGXS479xC3KV7qjsM.jar
    Oct 28, 2021 6:46:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests--bul8qLquE7A_FJqWm2fDi-wEYmamMPx4IVgBJLFIJc.jar
    Oct 28, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 3 files newly uploaded in 0 seconds
    Oct 28, 2021 6:46:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 28, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 56703fd3425e8fea04ff5229d403e025d7adb6f9b8e78c8cd6a4727593fde6d2> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-VnA_00Jej-oE_1Ip1APgJdettvm454yM1qRydZP95tI.pb
    Oct 28, 2021 6:46:27 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 28, 2021 6:46:27 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 28, 2021 6:46:27 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 28, 2021 6:46:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 28, 2021 6:46:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-28_11_46_28-11445409060149932431?project=apache-beam-testing
    Oct 28, 2021 6:46:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-28_11_46_28-11445409060149932431
    Oct 28, 2021 6:46:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-28_11_46_28-11445409060149932431
    Oct 28, 2021 6:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-28T18:46:31.079Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 28, 2021 6:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:39.756Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:40.608Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:40.649Z: Expanding GroupByKey operations into optimizable parts.
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:40.679Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:40.802Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:40.854Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:40.897Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:41.287Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:41.373Z: Starting 5 workers in us-central1-c...
    Oct 28, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:56.442Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 28, 2021 6:47:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:47:27.321Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 28, 2021 6:47:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:47:52.749Z: Workers have started successfully.
    Oct 28, 2021 6:47:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:47:52.786Z: Workers have started successfully.
    Oct 28, 2021 6:48:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:48:23.630Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 6:48:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:48:23.773Z: Cleaning up.
    Oct 28, 2021 6:48:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:48:23.869Z: Stopping worker pool...
    Oct 28, 2021 6:50:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:50:48.807Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 28, 2021 6:50:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:50:48.844Z: Worker pool stopped.
    Oct 28, 2021 6:50:53 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-28_11_46_28-11445409060149932431 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): bfc0952f-8f56-47a5-922c-3994641caac2 and timestamp: 2021-10-28T18:50:54.023000000Z:
                     Metric:                    Value:
                   read_time                     9.587
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 6:50:54 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 43.738 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 32s
152 actionable tasks: 99 executed, 53 from cache

Publishing build scan...
https://gradle.com/s/dmwrb4pxpg6ym

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2600

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2600/display/redirect>

Changes:


------------------------------------------
[...truncated 337.07 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 67 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5eb7566f872e9ad0db84f74cdbe95784
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 67'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 67'
Successfully started process 'Gradle Test Executor 67'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 28, 2021 12:44:37 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 28, 2021 12:44:38 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 28, 2021 12:44:38 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 28, 2021 12:44:41 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:44:41 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:44:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 12:44:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:44:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:44:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 12:44:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 12:44:44 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 28, 2021 12:44:44 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 28, 2021 12:44:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 28, 2021 12:44:47 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 28, 2021 12:44:47 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 28, 2021 12:44:47 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7560152266624824864.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-GwAWYyY-angsnSTYdL76SchfdCUGrTr78TophpKj5_4.jar
    Oct 28, 2021 12:44:47 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 28, 2021 12:44:47 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 28, 2021 12:44:47 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 09553327c88a9b4bf549a3cddf4c47e2df57086a2294168f1542b05da6b0eb6a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-CVUzJ8iKm0v1SaPN30xH4t9XCGoilBaPFUKwXaaw62o.pb
    Oct 28, 2021 12:44:49 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 28, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 28, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 28, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 28, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-28_05_44_50-18135648208084954008?project=apache-beam-testing
    Oct 28, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-28_05_44_50-18135648208084954008
    Oct 28, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-28_05_44_50-18135648208084954008
    Oct 28, 2021 12:44:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-28T12:44:53.562Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 28, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:02.666Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 28, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:03.451Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 28, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:03.534Z: Expanding GroupByKey operations into optimizable parts.
    Oct 28, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:03.587Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 28, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:03.735Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 28, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:03.787Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 28, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:03.832Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 28, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:04.370Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:04.460Z: Starting 5 workers in us-central1-c...
    Oct 28, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:30.174Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 28, 2021 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:49.666Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 28, 2021 12:46:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:46:16.361Z: Workers have started successfully.
    Oct 28, 2021 12:46:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:46:16.403Z: Workers have started successfully.
    Oct 28, 2021 12:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:46:49.832Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 12:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:46:50.049Z: Cleaning up.
    Oct 28, 2021 12:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:46:50.198Z: Stopping worker pool...
    Oct 28, 2021 12:49:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:49:09.690Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 28, 2021 12:49:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:49:09.744Z: Worker pool stopped.
    Oct 28, 2021 12:49:17 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-28_05_44_50-18135648208084954008 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 17a4f0a8-d078-4155-87d8-c28eecc63f23 and timestamp: 2021-10-28T12:49:17.227000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    12.409

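A note on the two metrics, which the log does not define: read_time is presumably seconds spent reading (captured by ParDo(TimeMonitor)), and fields_read counts individual Row fields seen by ParDo(RowMonitor); at four projected columns per row, 4,375,276 fields works out to roughly 1.09 million rows passing the pushed-down filter.
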
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 12:49:18 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
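
This warning is incidental to the build failure: the run was not given InfluxDB measurement/database settings, so the results above are only printed and, per the --metricsBigQueryDataset/--metricsBigQueryTable pipeline options this suite passes (visible in the Gradle command lines below), presumably exported to BigQuery; nothing is mirrored to InfluxDB.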

Gradle Test Executor 67 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.004 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.003 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 44.642 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 1s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/bh6g37te72cvg

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2599

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2599/display/redirect?page=changes>

Changes:

[aydar.zaynutdinov] [BEAM-12970][Playground]

[aydar.zaynutdinov] [BEAM-12970][Playground]

[eugene.nikolayev] Fix Python docs build pre-commit failures.

[aydar.zaynutdinov] [BEAM-12970][Playground]

[aydar.zaynutdinov] [BEAM-12970][Playground]


------------------------------------------
[...truncated 339.83 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5eb7566f872e9ad0db84f74cdbe95784
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 28, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 28, 2021 6:45:13 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 28, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 28, 2021 6:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

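This failure (and the identical readUsingDefaultMethod failure below) happens at pipeline construction time, before anything is submitted to Dataflow: the Row output of ParDo(RowMonitor) carries no schema, so no coder can be inferred. The remedy the message itself suggests is to attach the schema to the Row PCollection. A minimal sketch of that pattern follows; the schema and the pass-through DoFn standing in for RowMonitor are illustrative, not the IT's actual code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SetRowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")
            .build();

        PCollection<Row> rows = p.apply(
            Create.of(Row.withSchema(schema).addValues("a", "story", "title", 3).build())
                .withRowSchema(schema));

        // Pass-through stand-in for ParDo(RowMonitor): its Row output has no
        // schema of its own, which is exactly the state the trace reports.
        PCollection<Row> monitored = rows
            .apply(ParDo.of(new DoFn<Row, Row>() {
              @ProcessElement
              public void processElement(@Element Row row, OutputReceiver<Row> out) {
                out.output(row);
              }
            }))
            // Without this call, PCollection.getCoder() fails with the
            // IllegalStateException shown above.
            .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }
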
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 28, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 28, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 28, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 28, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4787234441660124364.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-DaFQrDhZSQVckGS-jAI6xlIbcKHYPsDJE7j-bgZxato.jar
    Oct 28, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 28, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 28, 2021 6:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 28, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 9b09f064391c9933764fa0a140aafa200ebaeabd273f4e84f31ae1b3fd0033e8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-mwnwZDkcmTN2T6ChQKr6IA666r0nP06E8xrhs_0AM-g.pb
    Oct 28, 2021 6:45:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 28, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 28, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 28, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 28, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-27_23_45_26-12003482978697856457?project=apache-beam-testing
    Oct 28, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-27_23_45_26-12003482978697856457
    Oct 28, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-27_23_45_26-12003482978697856457
    Oct 28, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-28T06:45:29.481Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 28, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:34.627Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 28, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:35.417Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 28, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:35.454Z: Expanding GroupByKey operations into optimizable parts.
    Oct 28, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:35.483Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 28, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:35.535Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 28, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:35.567Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 28, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:35.603Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 28, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:35.960Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:36.067Z: Starting 5 workers in us-central1-a...
    Oct 28, 2021 6:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:46:07.499Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 28, 2021 6:46:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:46:22.238Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 28, 2021 6:46:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:46:46.952Z: Workers have started successfully.
    Oct 28, 2021 6:46:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:46:46.985Z: Workers have started successfully.
    Oct 28, 2021 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:47:14.418Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:47:14.559Z: Cleaning up.
    Oct 28, 2021 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:47:14.627Z: Stopping worker pool...
    Oct 28, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:49:46.788Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 28, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:49:46.855Z: Worker pool stopped.
    Oct 28, 2021 6:49:53 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-27_23_45_26-12003482978697856457 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d748ecc5-b726-4239-a0e9-9d31524f5959 and timestamp: 2021-10-28T06:49:53.359000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      8.46

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 6:49:53 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 45.485 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 35s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/35hc56imup2ns

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2598

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2598/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13066, BEAM-13087] Workaround for coder type combiner packing

[noreply] [BEAM-13066] Disable using abstract iterable by default. (#15805)


------------------------------------------
[...truncated 340.58 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5eb7566f872e9ad0db84f74cdbe95784
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 28, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 28, 2021 12:45:17 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 28, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 28, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 28, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 28, 2021 12:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 28, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 28, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 28, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5598585220275370564.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-BZ6VU6Xzg0o23EE2nYrr8Xbo_FKn6WYjIdMhpPUuLwA.jar
    Oct 28, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 28, 2021 12:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 28, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash 5984e1dd14a50e50345099a88b05ed4873b34fc45b5756a7c653e7aa645b4852> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-WYTh3RSlDlA0UJmoiwXtSHOzT8RbV1anxlPnqmRbSFI.pb
    Oct 28, 2021 12:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 28, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 28, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 28, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 28, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-27_17_45_31-14969144643146501529?project=apache-beam-testing
    Oct 28, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-27_17_45_31-14969144643146501529
    Oct 28, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-27_17_45_31-14969144643146501529
    Oct 28, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-28T00:45:34.473Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 28, 2021 12:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:42.543Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:43.345Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:43.410Z: Expanding GroupByKey operations into optimizable parts.
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:43.443Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:43.529Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:43.571Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:43.606Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:44.109Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:44.191Z: Starting 5 workers in us-central1-c...
    Oct 28, 2021 12:46:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:46:08.576Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 28, 2021 12:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:46:23.702Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 28, 2021 12:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:46:49.122Z: Workers have started successfully.
    Oct 28, 2021 12:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:46:49.158Z: Workers have started successfully.
    Oct 28, 2021 12:47:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:47:17.184Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 12:47:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:47:17.353Z: Cleaning up.
    Oct 28, 2021 12:47:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:47:17.474Z: Stopping worker pool...
    Oct 28, 2021 12:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:49:40.020Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 28, 2021 12:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:49:40.089Z: Worker pool stopped.
    Oct 28, 2021 12:49:46 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-27_17_45_31-14969144643146501529 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e909d565-a8ed-4b80-bc76-e9c3807411b7 and timestamp: 2021-10-28T00:49:46.751000000Z:
                     Metric:                    Value:
                   read_time                     6.924
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 12:49:47 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 34.207 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 27s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/iantowjpkvvx6

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2597

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2597/display/redirect?page=changes>

Changes:

[relax] support json

[Andrew Pilloud] [BEAM-13118] Don't force push to master

[noreply] Merge pull request #15731: [BEAM-13067]  Mark GroupIntoBatches output as


------------------------------------------
[...truncated 340.65 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5eb7566f872e9ad0db84f74cdbe95784
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'
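
The -DbeamTestPipelineOptions JSON array on the command line above is how these integration tests receive their runner configuration. A minimal sketch of the consuming side:

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    class TestOptionsSketch {
      static PipelineOptions fromSystemProperty() {
        // TestPipeline parses the beamTestPipelineOptions system property
        // (the JSON array Gradle passed to the test worker JVM).
        return TestPipeline.testingPipelineOptions();
      }
    }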

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 27, 2021 6:54:41 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 27, 2021 6:54:42 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 27, 2021 6:54:42 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 27, 2021 6:54:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:54:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:54:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 6:54:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:54:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:54:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 6:54:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
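
As the exception text itself suggests, the failure is addressable at pipeline-construction time by attaching a schema (or an explicit coder) to the Row PCollection. A minimal sketch of both remedies; the field names and types below are assumptions matching the projected columns, not the IT's actual schema:

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFixSketch {
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")   // assumed INT64, matching BigQuery integers
            .build();
        // Either call lets coder inference succeed:
        //   rows.setCoder(RowCoder.of(schema));
        return rows.setRowSchema(schema);
      }
    }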

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
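
The usedFields projection and the pushed-down filter in the plan above map, at the IO level, onto the BigQuery Storage API's selected fields and row restriction. A hedged sketch of the equivalent direct BigQueryIO read (the IT drives this through the SQL planner rather than calling BigQueryIO directly, and the table reference below is an assumption):

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.values.PCollection;

    class PushDownSketch {
      static PCollection<TableRow> read(Pipeline pipeline) {
        return pipeline.apply(
            BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS")  // hypothetical table ref
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Only these columns leave BigQuery...
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // ...and the predicate is evaluated server-side.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
      }
    }
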
    Oct 27, 2021 6:54:49 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 27, 2021 6:54:52 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 27, 2021 6:54:52 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 27, 2021 6:54:53 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8356130489921905775.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-EvmbEY4m5Fwb6WA3h20VrKyaHbAJF8O4T28n7DJxMc4.jar
    Oct 27, 2021 6:54:53 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 27, 2021 6:54:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 27, 2021 6:54:53 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 2951c5cbb7d2f241b4cd8720b8b6b6894f53b72c98e72adab7bf82cca305472b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-KVHFy7fS8kG0zYcguLa2iU9TtyyY5yrat7-CzKMFRys.pb
    Oct 27, 2021 6:54:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 27, 2021 6:54:56 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 27, 2021 6:54:56 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 27, 2021 6:54:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 27, 2021 6:54:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-27_11_54_56-253676225406959320?project=apache-beam-testing
    Oct 27, 2021 6:54:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-27_11_54_56-253676225406959320
    Oct 27, 2021 6:54:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-27_11_54_56-253676225406959320
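
Besides the gcloud command the runner prints, the same job can be cancelled programmatically through the handle returned by run(); a minimal sketch:

    import java.io.IOException;
    import org.apache.beam.sdk.PipelineResult;

    class CancelSketch {
      // PipelineResult.cancel() asks the service to cancel the running job,
      // mirroring the gcloud command shown in the log.
      static void cancel(PipelineResult result) throws IOException {
        result.cancel();
      }
    }
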
    Oct 27, 2021 6:55:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-27T18:54:59.490Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:05.216Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.046Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.074Z: Expanding GroupByKey operations into optimizable parts.
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.098Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.145Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.164Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.189Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.439Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.505Z: Starting 5 workers in us-central1-c...
    Oct 27, 2021 6:55:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:14.765Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 27, 2021 6:55:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:48.336Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 27, 2021 6:56:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:56:14.619Z: Workers have started successfully.
    Oct 27, 2021 6:56:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:56:14.648Z: Workers have started successfully.
    Oct 27, 2021 6:56:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:56:44.564Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 6:56:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:56:44.701Z: Cleaning up.
    Oct 27, 2021 6:56:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:56:44.755Z: Stopping worker pool...
    Oct 27, 2021 6:59:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:59:11.801Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 27, 2021 6:59:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:59:11.836Z: Worker pool stopped.
    Oct 27, 2021 6:59:17 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-27_11_54_56-253676225406959320 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0acc9e99-6318-4ba7-99bc-9f208f29a8b7 and timestamp: 2021-10-27T18:59:17.677000000Z:
                     Metric:                    Value:
                   read_time                     8.821
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 6:59:18 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 40.766 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 33s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/boio2beypbzki

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2596

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2596/display/redirect?page=changes>

Changes:

[mmack] [BEAM-8958] Use AWS credentials provider with BasicKinesisProvider (AWS


------------------------------------------
[...truncated 338.19 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 6bf5391d49e2e1a5a9437f0674ac4d43
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 27, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 27, 2021 12:45:00 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 27, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 27, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 27, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-Q7KcmHeN9oWjmrz9t6eemA5BTEfZSF7ff75URlOqem8.jar
    Oct 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6083070067652228216.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-G-WChelNsnAhHz9nSpygxHAtd3jeqfJnFy1YNnNXPKA.jar
    Oct 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 19f61bcb318900ae56e6668214b12d9848b83c7bd49402b1df20ab01e7f363cd> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-GfYbyzGJAK5W5maCFLEtmEi4PHvUlAKx3yCrAefzY80.pb
    Oct 27, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 27, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 27, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 27, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 27, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-27_05_45_14-15978571298395169035?project=apache-beam-testing
    Oct 27, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-27_05_45_14-15978571298395169035
    Oct 27, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-27_05_45_14-15978571298395169035
    Oct 27, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-27T12:45:17.543Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 27, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:22.980Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:23.858Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:23.906Z: Expanding GroupByKey operations into optimizable parts.
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:23.943Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:24.030Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:24.065Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:24.113Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:24.507Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:24.586Z: Starting 5 workers in us-central1-a...
    Oct 27, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:38.918Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 27, 2021 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:46:20.845Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 27, 2021 12:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:46:47.882Z: Workers have started successfully.
    Oct 27, 2021 12:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:46:47.923Z: Workers have started successfully.
    Oct 27, 2021 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:47:16.089Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:47:16.206Z: Cleaning up.
    Oct 27, 2021 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:47:16.294Z: Stopping worker pool...
    Oct 27, 2021 12:49:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:49:38.626Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 27, 2021 12:49:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:49:38.674Z: Worker pool stopped.
    Oct 27, 2021 12:49:47 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-27_05_45_14-15978571298395169035 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 60c7d0f0-d613-44b6-98f2-970d0e76913d and timestamp: 2021-10-27T12:49:47.848000000Z:
                     Metric:                    Value:
                   read_time                     9.143
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 12:49:48 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 52.334 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 29s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/flyqjbe5yx4qs

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2595

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2595/display/redirect?page=changes>

Changes:

[heejong] [BEAM-12978] Customizable dependency for Java external transform

[heejong] add ConfigT param to getDependencies method


------------------------------------------
[...truncated 352.20 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 5 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 6bf5391d49e2e1a5a9437f0674ac4d43
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 5'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 5'
Successfully started process 'Gradle Test Executor 5'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 27, 2021 6:47:20 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 27, 2021 6:47:21 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 27, 2021 6:47:22 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 27, 2021 6:47:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:47:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:47:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 6:47:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:47:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:47:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 6:47:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 6:47:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 27, 2021 6:47:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
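
Note how the BEAMPlan above differs from the BeamCalcRel plans of the two failing tests: only the four used fields are read, and the filter is shipped to BigQuery. As a rough BigQueryIO-level equivalent (a sketch, not the IT's code: the selected fields and row restriction are copied from the log lines above, while the table id is an assumed stand-in):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    class PushDownSketch {
      static PCollection<TableRow> read(Pipeline p) {
        return p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // assumed table id
                .withMethod(Method.DIRECT_READ) // BigQuery Storage Read API
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
      }
    }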
    Oct 27, 2021 6:47:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 27, 2021 6:47:31 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 27, 2021 6:47:32 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-Q7KcmHeN9oWjmrz9t6eemA5BTEfZSF7ff75URlOqem8.jar
    Oct 27, 2021 6:47:32 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1125005737521853143.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6CejtyXlnz_x_I5eE5ffhxfTYe5KlPH8WMTSxJ6h4xk.jar
    Oct 27, 2021 6:47:32 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 27, 2021 6:47:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 27, 2021 6:47:32 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash ddb448d9b010f9b2534dcd51002aa98e499f9bd6c6f090c1892db5aa26425451> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-3bRI2bAQ-bJTTc1RACqpjkmfm9bG8JDBiS21qiZCVFE.pb
    Oct 27, 2021 6:47:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 27, 2021 6:47:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 27, 2021 6:47:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 27, 2021 6:47:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 27, 2021 6:47:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-26_23_47_35-6282463995481441555?project=apache-beam-testing
    Oct 27, 2021 6:47:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-26_23_47_35-6282463995481441555
    Oct 27, 2021 6:47:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-26_23_47_35-6282463995481441555
    Oct 27, 2021 6:47:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-27T06:47:38.747Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:44.067Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:44.746Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:44.776Z: Expanding GroupByKey operations into optimizable parts.
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:44.807Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:44.881Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:44.905Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:44.937Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:45.259Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 6:47:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:45.336Z: Starting 5 workers in us-central1-a...
    Oct 27, 2021 6:48:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:48:04.879Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 27, 2021 6:48:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:48:28.879Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 27, 2021 6:48:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:48:57.013Z: Workers have started successfully.
    Oct 27, 2021 6:48:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:48:57.046Z: Workers have started successfully.
    Oct 27, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:49:23.197Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:49:23.352Z: Cleaning up.
    Oct 27, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:49:23.427Z: Stopping worker pool...
    Oct 27, 2021 6:51:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:51:53.315Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 27, 2021 6:51:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:51:53.351Z: Worker pool stopped.
    Oct 27, 2021 6:51:58 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-26_23_47_35-6282463995481441555 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9f474d3d-6267-4c5e-a9bf-cf1cdcbb4e41 and timestamp: 2021-10-27T06:51:58.748000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.579

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 6:51:59 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
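
This warning concerns metrics publication only, not the test verdict: publishWithCheck skips the InfluxDB export when no measurement/database is configured, so the read_time and fields_read values above were printed but not published. Supplying those settings (in Beam's test utilities they are normally wired through pipeline options; the exact option names are not shown in this log) would enable the export.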

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.049 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 42.557 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 29s
152 actionable tasks: 105 executed, 47 from cache

Publishing build scan...
https://gradle.com/s/uuldaalp4e464

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2594

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2594/display/redirect?page=changes>

Changes:

[melissapa] [BEAM-11758] Update basics page: Splittable DoFn

[melissapa] Address review feedback

[noreply] [BEAM-4149] Ensure that we always provide and require the worker id.

[noreply] [BEAM-13098] Fix translation of repeated TableRow fields (#15779)


------------------------------------------
[...truncated 340.56 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 509abcf44d00236d233812d1ca309722
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'
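
The -DbeamTestPipelineOptions JSON array on the command line above is how the integration test receives its pipeline options. A minimal sketch of reading them back inside a test process (ShowTestOptions is a hypothetical name; TestPipeline.testingPipelineOptions() is the standard entry point):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    class ShowTestOptions {
      public static void main(String[] args) {
        // Parses the JSON array supplied via -DbeamTestPipelineOptions.
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        System.out.println(options.getRunner().getSimpleName()); // DataflowRunner here
      }
    }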

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
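
The SLF4J notice is classpath noise, independent of the test failures below: both the staged legacy Dataflow worker jar and slf4j-jdk14 ship a StaticLoggerBinder, and SLF4J picks one of them (here the JDK14 binding). Per the linked slf4j.org page, the usual remedy is to keep exactly one binding on the test classpath.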

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 27, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 27, 2021 12:45:08 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 27, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 27, 2021 12:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@311251360]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 27, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 27, 2021 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 27, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 27, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-Q7KcmHeN9oWjmrz9t6eemA5BTEfZSF7ff75URlOqem8.jar
    Oct 27, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6688066615625574093.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-8dZw05SraGBXmGFC-CUKDNtEF8079vcIZPpOvd2_RnA.jar
    Oct 27, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 27, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 27, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 16dfe6f87483c41b4cf30f2a0e14ee270f5ac593ebf24499753c8b780b7a077a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Ft_m-HSDxBtM8w8qDhTuJw9axZPr8kSZdTyLeAt6B3o.pb
    Oct 27, 2021 12:45:20 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 27, 2021 12:45:20 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 27, 2021 12:45:20 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 27, 2021 12:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-26_17_45_20-11289818588678210841?project=apache-beam-testing
    Oct 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-26_17_45_20-11289818588678210841
    Oct 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-26_17_45_20-11289818588678210841
    Oct 27, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-27T00:45:23.701Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:31.587Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:32.486Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:32.571Z: Expanding GroupByKey operations into optimizable parts.
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:32.597Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:32.677Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:32.730Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:32.766Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:33.312Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 12:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:33.405Z: Starting 5 workers in us-central1-c...
    Oct 27, 2021 12:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:38.847Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 27, 2021 12:46:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:46:14.152Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 27, 2021 12:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:46:40.065Z: Workers have started successfully.
    Oct 27, 2021 12:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:46:40.131Z: Workers have started successfully.
    Oct 27, 2021 12:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:47:07.223Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 12:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:47:07.386Z: Cleaning up.
    Oct 27, 2021 12:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:47:07.473Z: Stopping worker pool...
    Oct 27, 2021 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:49:25.623Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 27, 2021 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:49:25.671Z: Worker pool stopped.
    Oct 27, 2021 12:49:33 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-26_17_45_20-11289818588678210841 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 49065fb8-a289-4515-9290-47942996a32e and timestamp: 2021-10-27T00:49:33.068000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.649

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 12:49:33 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 29.193 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/wivncqkpu6f7y

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2593

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2593/display/redirect?page=changes>

Changes:

[baeminbo] [BEAM-6721] Set numShards dynamically for TextIO.write()

[Alexey Romanenko] [BEAM-13104] ParquetIO: SplitReadFn must read the whole block

[noreply] Change sql.Options to an interface under sqlx. (#15790)


------------------------------------------
[...truncated 348.98 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8a5effdddaa8746e2d7f7e13a45fb337
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 26, 2021 6:50:08 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 26, 2021 6:50:14 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 26, 2021 6:50:16 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 26, 2021 6:50:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:50:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:50:37 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 6:50:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:50:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:50:40 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 6:50:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1970546799]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 26, 2021 6:50:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 6:50:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:50:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 6:50:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 6:50:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:50:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 6:50:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@181648722]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 6:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:50:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 6:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:50:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 6:50:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 26, 2021 6:50:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 26, 2021 6:50:50 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 26, 2021 6:51:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 26, 2021 6:51:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-G61PilMdlDdF-JxG3AftmHgpano1A2AXY4yoE7Tp0qM.jar
    Oct 26, 2021 6:51:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2703572330249305303.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-d8IvSdqPlnGwG9XFFuETwgT-Iabdz_B4fI2mNGqeffc.jar
    Oct 26, 2021 6:51:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 5 seconds
    Oct 26, 2021 6:51:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 26, 2021 6:51:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash aa588c8de4c9395979f2e52247d03243ab91495f1ef528cb875824f3e1e337d4> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-qliMjeTJOVl58uUiR9AyQ6uRSV8e9SjLh1gk8-HjN9Q.pb
    Oct 26, 2021 6:51:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 26, 2021 6:51:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 26, 2021 6:51:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 26, 2021 6:51:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 26, 2021 6:51:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-26_11_51_16-15819183002633272158?project=apache-beam-testing
    Oct 26, 2021 6:51:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-26_11_51_16-15819183002633272158
    Oct 26, 2021 6:51:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-26_11_51_16-15819183002633272158
    Oct 26, 2021 6:51:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-26T18:51:19.876Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:25.886Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:26.739Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:26.777Z: Expanding GroupByKey operations into optimizable parts.
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:26.806Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:26.872Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:26.906Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:26.929Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:27.302Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:27.385Z: Starting 5 workers in us-central1-a...
    Oct 26, 2021 6:52:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:59.019Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 26, 2021 6:52:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:52:13.223Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 26, 2021 6:52:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:52:13.260Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 26, 2021 6:52:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:52:23.640Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 26, 2021 6:52:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:52:41.480Z: Workers have started successfully.
    Oct 26, 2021 6:52:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:52:41.507Z: Workers have started successfully.
    Oct 26, 2021 6:53:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:53:11.612Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 6:53:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:53:11.752Z: Cleaning up.
    Oct 26, 2021 6:53:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:53:11.821Z: Stopping worker pool...
    Oct 26, 2021 6:55:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:55:28.791Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 26, 2021 6:55:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:55:28.830Z: Worker pool stopped.
    Oct 26, 2021 6:55:37 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-26_11_51_16-15819183002633272158 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6128655d-a700-4ef8-87d6-ed21a7acecbd and timestamp: 2021-10-26T18:55:37.598000000Z:
                     Metric:                    Value:
                   read_time                     9.969
                 fields_read                 4375276.0
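
The read_time and fields_read figures above appear to come from the ParDo(TimeMonitor) and ParDo(RowMonitor) steps fused into the read stage earlier in this log. As a rough, hypothetical sketch (not the test's actual code) of how such a counter can be accumulated with Beam's Metrics API:

    import org.apache.beam.sdk.metrics.Counter;
    import org.apache.beam.sdk.metrics.Metrics;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.values.Row;

    // Illustrative monitor: counts every field of every Row passing through,
    // roughly what a fields_read-style metric would measure.
    class RowMonitorFn extends DoFn<Row, Row> {
      private final Counter fieldsRead = Metrics.counter("perf_test", "fields_read");

      @ProcessElement
      public void processElement(@Element Row row, OutputReceiver<Row> out) {
        fieldsRead.inc(row.getFieldCount());
        out.output(row);
      }
    }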

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 6:55:38 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
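
This warning means the run's metrics stayed local: publishWithCheck skips publishing unless both a measurement and a database are configured. A minimal sketch of supplying them, assuming the InfluxDBSettings builder API in Beam's test utilities (host, database, and measurement values here are all assumptions):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    // Hypothetical configuration; with either database or measurement left
    // unset, the publisher logs the warning above and publishes nothing.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")           // assumed host
            .withDatabase("beam_test_metrics")           // assumed database
            .withMeasurement("sql_bqio_read_java_batch") // assumed measurement
            .get();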

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.042 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 5 mins 50.177 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 10m 37s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/lvzmfvumwusb6

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2592

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2592/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #15744 from [BEAM-13072][Playground] Executor builder


------------------------------------------
[...truncated 337.26 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ab2087c21b1e2b8cf9f9a8656aead894
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
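
The -DbeamTestPipelineOptions system property in the command above is a JSON array of pipeline arguments; TestPipeline deserializes it when a test constructs its options. A minimal, illustrative sketch of how a test picks those options up:

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    // testingPipelineOptions() parses the beamTestPipelineOptions system
    // property, so the Gradle-supplied flags (runner, project, workers, ...)
    // reach the pipeline without any test-specific plumbing.
    PipelineOptions options = TestPipeline.testingPipelineOptions();
    TestPipeline pipeline = TestPipeline.fromOptions(options);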

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 26, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 26, 2021 12:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 26, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 26, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
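
The failure above (and the identical one for readUsingDefaultMethod below) is a missing-coder problem: the PCollection<Row> produced under BeamIOSourceRel has neither an explicit Coder nor a schema. A minimal sketch of the two remedies the exception message itself lists, with a hypothetical schema standing in for the query's output shape:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    static PCollection<Row> withRowSchema(PCollection<Row> rows) {
      // Hypothetical schema for the four selected columns.
      Schema schema =
          Schema.builder()
              .addStringField("author")
              .addStringField("type")
              .addStringField("title")
              .addInt64Field("score")
              .build();
      // Attaching the schema installs a SchemaCoder, which is what the
      // error message recommends for Beam Rows. Alternatively, set a
      // coder explicitly: rows.setCoder(RowCoder.of(schema));
      return rows.setRowSchema(schema);
    }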

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 26, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
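
Here the planner collapsed the LogicalProject and LogicalFilter into a single BeamPushDownIOSourceRel, so only the four used fields and the filtered rows are read via the Storage Read API; this is what the push-down variant of the test measures. A minimal sketch of setting up such a table through SqlTransform, where the DDL, table location, and pipeline variable are all assumptions:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Hypothetical DDL: registering the table with method DIRECT_READ is what
    // makes project/filter push-down possible for BeamSQL over BigQuery.
    String ddl =
        "CREATE EXTERNAL TABLE HACKER_NEWS(`by` VARCHAR, type VARCHAR, title VARCHAR, score BIGINT) "
            + "TYPE bigquery "
            + "LOCATION 'bigquery-public-data:hacker_news.full' "
            + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";

    PCollection<Row> result =
        pipeline.apply(
            SqlTransform.query(
                    "SELECT `by` AS author, type, title, score FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                .withDdlString(ddl));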
    Oct 26, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 26, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 26, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-G61PilMdlDdF-JxG3AftmHgpano1A2AXY4yoE7Tp0qM.jar
    Oct 26, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6050859197808693486.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ZCR6r3qW-kGRdDEzDhelvP8fTXza1crsnuKBelhYVF4.jar
    Oct 26, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 26, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 26, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 21f983668a6ff0d4dcf11159d9899512b6c9e357c3a7c7474d0cf207cb679cd3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-IfmDZopv8NTc8RFZ2YmVErbJ41fDp8dHTQzyB8tnnNM.pb
    Oct 26, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 26, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 26, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 26, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 26, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-26_05_45_04-12110504230520579641?project=apache-beam-testing
    Oct 26, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-26_05_45_04-12110504230520579641
    Oct 26, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-26_05_45_04-12110504230520579641
    Oct 26, 2021 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-26T12:45:08.122Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:21.224Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.028Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.063Z: Expanding GroupByKey operations into optimizable parts.
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.102Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.181Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.211Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.243Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.559Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.656Z: Starting 5 workers in us-central1-a...
    Oct 26, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:43.306Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 26, 2021 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:46:11.749Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 26, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:46:36.527Z: Workers have started successfully.
    Oct 26, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:46:36.558Z: Workers have started successfully.
    Oct 26, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:47:03.924Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:47:04.074Z: Cleaning up.
    Oct 26, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:47:04.153Z: Stopping worker pool...
    Oct 26, 2021 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:49:29.906Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 26, 2021 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:49:29.944Z: Worker pool stopped.
    Oct 26, 2021 12:49:36 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-26_05_45_04-12110504230520579641 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 223e23a8-5da9-49c9-96c9-30c691bad098 and timestamp: 2021-10-26T12:49:36.359000000Z:
                     Metric:                    Value:
                   read_time                     7.315
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 12:49:36 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 49.525 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 18s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/omvofqq6jbqdq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2591

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2591/display/redirect>

Changes:


------------------------------------------
[...truncated 341.86 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ab2087c21b1e2b8cf9f9a8656aead894
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 26, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 26, 2021 6:45:08 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 26, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 26, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 26, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 26, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 26, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-G61PilMdlDdF-JxG3AftmHgpano1A2AXY4yoE7Tp0qM.jar
    Oct 26, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4531258299563612406.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-CwYI0P8cmgGqZfkdFPlfqEAKHfknuI4QklhixXRLHM8.jar
    Oct 26, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 26, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 26, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash cf8f18f4b20c815acad4a50152d9a8fd23593ca1985bdaec29729247d37e09cd> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-z48Y9LIMgVrK1KUBUtmo_SNZPKGYW9rsKXKSR9N-Cc0.pb
    Oct 26, 2021 6:45:20 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 26, 2021 6:45:20 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 26, 2021 6:45:20 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 26, 2021 6:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 26, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-25_23_45_20-16445457230774478233?project=apache-beam-testing
    Oct 26, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-25_23_45_20-16445457230774478233
    Oct 26, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-25_23_45_20-16445457230774478233
    Oct 26, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-26T06:45:24.783Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 26, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:33.539Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 26, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.223Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 26, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.260Z: Expanding GroupByKey operations into optimizable parts.
    Oct 26, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.290Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 26, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.347Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 26, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.376Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 26, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.408Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 26, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.745Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.822Z: Starting 5 workers in us-central1-a...
    Oct 26, 2021 6:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:46:02.618Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 26, 2021 6:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:46:20.189Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 26, 2021 6:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:46:46.364Z: Workers have started successfully.
    Oct 26, 2021 6:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:46:46.386Z: Workers have started successfully.
    Oct 26, 2021 6:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:47:12.785Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 6:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:47:12.921Z: Cleaning up.
    Oct 26, 2021 6:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:47:13.018Z: Stopping worker pool...
    Oct 26, 2021 6:49:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:49:45.107Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 26, 2021 6:49:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:49:45.143Z: Worker pool stopped.
    Oct 26, 2021 6:49:51 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-25_23_45_20-16445457230774478233 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4f5d7a15-90e5-4d9a-bf06-68c882807e58 and timestamp: 2021-10-26T06:49:51.269000000Z:
                     Metric:                    Value:
                   read_time                     8.179
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 6:49:51 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 47.408 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 33s
152 actionable tasks: 100 executed, 52 from cache

Publishing build scan...
https://gradle.com/s/pp7uvuncuz4h6

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2590

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2590/display/redirect?page=changes>

Changes:

[Luke Cwik] [BEAM-12522] Enable side inputs on all splittable DoFn execution time

[noreply] Merge pull request #15783 from [BEAM-13048] [Playground] Add shortcuts

[Luke Cwik] [BEAM-8543] Fix test filtering for Dataflow Runner V2 to exclude


------------------------------------------
[...truncated 341.95 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ab2087c21b1e2b8cf9f9a8656aead894
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 26, 2021 12:45:18 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 26, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 26, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
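
The SQLPlan>/BEAMPlan> pair above is what the Calcite planner logs when a query goes through Beam SQL. A minimal runnable sketch, assuming a toy in-memory stand-in for the HACKER_NEWS table (schema and values are illustrative assumptions); the same query shape produces the LogicalProject/LogicalFilter and BeamCalcRel plans shown above:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SqlPlanSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Toy stand-in for HACKER_NEWS, reduced to the queried columns.
        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        PCollection<Row> input =
            p.apply(Create.of(
                    Row.withSchema(schema).addValues("alice", "story", "Hello", 3L).build())
                .withRowSchema(schema));

        // Same query shape as in the log; SqlTransform runs the Calcite planner.
        PCollection<Row> result = input.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }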


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
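
The fix the exception message names is a one-line schema attachment on the Row-typed output. A minimal sketch, assuming a toy schema and a pass-through DoFn standing in for the IT's RowMonitor (which is not reproduced here):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        Schema schema =
            Schema.builder().addStringField("author").addInt64Field("score").build();

        PCollection<Row> rows =
            p.apply(Create.of(Row.withSchema(schema).addValues("alice", 3L).build())
                .withRowSchema(schema));

        PCollection<Row> monitored =
            rows.apply("RowMonitorLike", ParDo.of(new DoFn<Row, Row>() {
                  @ProcessElement
                  public void process(@Element Row row, OutputReceiver<Row> out) {
                    out.output(row); // pass-through, like a monitoring ParDo
                  }
                }))
                // This is the call the exception asks for: without it the runner
                // cannot infer a coder for the Row output and fails exactly as in
                // the stack trace above.
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }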

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
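
    At the BigQueryIO level, the pushed-down read above corresponds roughly to projecting columns with withSelectedFields and passing the filter as a Storage API row restriction. A hedged sketch only (the table name is an assumption for illustration, not the IT's table):

        import java.util.Arrays;
        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.options.PipelineOptionsFactory;
        import org.apache.beam.sdk.values.PCollection;
        import com.google.api.services.bigquery.model.TableRow;

        public class PushDownSketch {
          public static void main(String[] args) {
            Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
            PCollection<TableRow> rows =
                p.apply(
                    BigQueryIO.readTableRows()
                        .from("bigquery-public-data:hacker_news.full") // assumed table
                        .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                        // Projection push-down: only the fields the query uses.
                        .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                        // Filter push-down: evaluated server-side by the Storage API.
                        .withRowRestriction(
                            "(type = 'story' OR type = 'job') AND score > 2"));
            p.run().waitUntilFinish();
          }
        }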
    Oct 26, 2021 12:45:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 26, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 26, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-G61PilMdlDdF-JxG3AftmHgpano1A2AXY4yoE7Tp0qM.jar
    Oct 26, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6884979175832345599.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-vwvqpYLE-nloHPrqT09fVU1QXg8SNxyDQ8RqB4lQCgc.jar
    Oct 26, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 26, 2021 12:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 26, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 0c8c4e1b924817efd5b4a031af9a4711c8626b1e36a1338edc79e48c8a2af012> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-DIxOG5JIF-_VtKAxr5pHEchiax42oTOO3HnkjIoq8BI.pb
    Oct 26, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 26, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 26, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 26, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 26, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-25_17_45_32-14179158492439714256?project=apache-beam-testing
    Oct 26, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-25_17_45_32-14179158492439714256
    Oct 26, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-25_17_45_32-14179158492439714256
    Oct 26, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-26T00:45:35.505Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 26, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:45.322Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.052Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.092Z: Expanding GroupByKey operations into optimizable parts.
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.118Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.206Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.232Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.259Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.616Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.671Z: Starting 5 workers in us-central1-a...
    Oct 26, 2021 12:46:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:46:10.065Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 26, 2021 12:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:46:30.068Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 26, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:46:54.870Z: Workers have started successfully.
    Oct 26, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:46:54.903Z: Workers have started successfully.
    Oct 26, 2021 12:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:47:23.907Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 12:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:47:24.074Z: Cleaning up.
    Oct 26, 2021 12:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:47:24.157Z: Stopping worker pool...
    Oct 26, 2021 12:49:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:49:48.338Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 26, 2021 12:49:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:49:48.375Z: Worker pool stopped.
    Oct 26, 2021 12:49:53 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-25_17_45_32-14179158492439714256 finished with status DONE.
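
    The DataflowPipelineJob above is what p.run() returns on this runner; reaching DONE is what the blocking wait below checks for. A minimal sketch of the usual pattern (the trivial Create input is a placeholder):

        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.PipelineResult;
        import org.apache.beam.sdk.options.PipelineOptionsFactory;
        import org.apache.beam.sdk.transforms.Create;

        public class TerminalStateSketch {
          public static void main(String[] args) {
            Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
            p.apply(Create.of(1, 2, 3));
            PipelineResult result = p.run();
            // Blocks until the job reaches a terminal state (DONE in the log above).
            PipelineResult.State state = result.waitUntilFinish();
            if (state != PipelineResult.State.DONE) {
              throw new RuntimeException("Job finished with state: " + state);
            }
          }
        }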

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): de667016-1e8a-4b6e-ba7d-f9af7602e370 and timestamp: 2021-10-26T00:49:53.529000000Z:
                     Metric:                    Value:
                   read_time                     8.328
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 12:49:53 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
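
    The warning above means the InfluxDB destination was never configured, so the metrics printed to STDOUT are not persisted. As a hedged sketch only (the host, database, and measurement values are placeholder assumptions; in the perf-test setup they normally arrive via pipeline options rather than being hard-coded), the settings the publisher checks for look like:

        import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

        public class InfluxSettingsSketch {
          public static void main(String[] args) {
            // Placeholder values; supplying database and measurement is what
            // silences the "Missing property" warning above.
            InfluxDBSettings settings = InfluxDBSettings.builder()
                .withHost("http://localhost:8086")
                .withDatabase("beam_test_metrics")
                .withMeasurement("sql_bqio_read_java_batch")
                .get();
            System.out.println(settings);
          }
        }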

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 40.531 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/pqpzuexkifwlw

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2589

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2589/display/redirect?page=changes>

Changes:

[aydar.zaynutdinov] [BEAM-13062][Playground]

[ilya.kozyrev] Add environment_service.go and structures for beam sdk, network envs,

[aydar.zaynutdinov] [BEAM-13062][Playground]

[noreply] Merge pull request #15772 from [BEAM-13032] [Playground] Implement the

[noreply] Merge pull request #15714 from [BEAM-13005] [Playground] Implement local

[noreply] Merge pull request #15770 from [BEAM-13095][Playground] Using working

[noreply] [BEAM-11758] Update basics page: Aggregation, Runner, UDF, Schema


------------------------------------------
[...truncated 338.34 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is b1c7c34038d3084017a9c5ad5e385269
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 25, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 25, 2021 6:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 25, 2021 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 25, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@311251360]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 25, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 25, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 25, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 25, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 25, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1401828045232600418.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-77zOSxMWEXP8Jnii8Hti27KAyEopOUys2QswanR0rLc.jar
    Oct 25, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 25, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 25, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash b89b701b0d2a46bfd25cbb0ccd4998f85fdc9bbde3fe0053ba972ed7e382a693> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-uJtwGw0qRr_SXLsMzUmY-F_cm73j_gBTupcu1-OCppM.pb
    Oct 25, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 25, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 25, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 25, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 25, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-25_11_45_09-11740430234319184322?project=apache-beam-testing
    Oct 25, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-25_11_45_09-11740430234319184322
    Oct 25, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-25_11_45_09-11740430234319184322
    Oct 25, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-25T18:45:12.380Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:41.050Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:41.899Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:41.925Z: Expanding GroupByKey operations into optimizable parts.
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:41.982Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:42.038Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:42.071Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:42.105Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:42.399Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 6:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:42.459Z: Starting 5 workers in us-central1-a...
    Oct 25, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:46:00.171Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 25, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:46:31.506Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 25, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:46:56.283Z: Workers have started successfully.
    Oct 25, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:46:56.311Z: Workers have started successfully.
    Oct 25, 2021 6:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:47:25.183Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 6:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:47:25.311Z: Cleaning up.
    Oct 25, 2021 6:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:47:25.368Z: Stopping worker pool...
    Oct 25, 2021 6:50:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:50:00.808Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 25, 2021 6:50:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:50:00.851Z: Worker pool stopped.
    Oct 25, 2021 6:50:08 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-25_11_45_09-11740430234319184322 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1a0b850a-389d-445d-b433-46fb67348710 and timestamp: 2021-10-25T18:50:08.662000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.789

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 6:50:09 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.085 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.083 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 5 mins 18.164 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 49s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/gn6ezsourdbjk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2588

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2588/display/redirect>

Changes:


------------------------------------------
[...truncated 338.59 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is b1c7c34038d3084017a9c5ad5e385269
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 25, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 25, 2021 12:45:07 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 25, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 25, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 25, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 25, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 25, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 25, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 25, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4749405735551215320.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-udTe-prq1QvhM3PQ7SYUFamjEZblg_QiQQadPqcK1mk.jar
    Oct 25, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 25, 2021 12:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 25, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash ebee6c9f08ece616e8d91d481ccb11239b5897ddb83e34d0b5c8a84f3ea7cd0e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-6-5snwjs5hbo2R1IHMsRI5tYl924PjTQtcioTz6nzQ4.pb
    Oct 25, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 25, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 25, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 25, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 25, 2021 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-25_05_45_21-3164346719354293428?project=apache-beam-testing
    Oct 25, 2021 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-25_05_45_21-3164346719354293428
    Oct 25, 2021 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-25_05_45_21-3164346719354293428
    Oct 25, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-25T12:45:25.220Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:32.669Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:33.496Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:33.541Z: Expanding GroupByKey operations into optimizable parts.
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:33.579Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:33.656Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:33.690Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:33.724Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:34.102Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:34.181Z: Starting 5 workers in us-central1-c...
    Oct 25, 2021 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:46:04.004Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 25, 2021 12:46:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:46:14.161Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 25, 2021 12:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:46:39.730Z: Workers have started successfully.
    Oct 25, 2021 12:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:46:39.751Z: Workers have started successfully.
    Oct 25, 2021 12:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:47:08.787Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 12:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:47:08.931Z: Cleaning up.
    Oct 25, 2021 12:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:47:09.025Z: Stopping worker pool...
    Oct 25, 2021 12:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:49:32.415Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 25, 2021 12:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:49:32.462Z: Worker pool stopped.
    Oct 25, 2021 12:49:42 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-25_05_45_21-3164346719354293428 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ce2c8767-afd2-4708-b36e-a06e5fcb9d0c and timestamp: 2021-10-25T12:49:42.320000000Z:
                     Metric:                    Value:
                   read_time                     8.531
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 12:49:42 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
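
The publisher skips metrics when the measurement/database properties are unset; in Beam's test utilities these normally arrive as pipeline options. A guess at what this job's -DbeamTestPipelineOptions array would need to add, with option names assumed from Beam testutils conventions (not confirmed by this log) and values taken from the BigQuery metrics options already present:

    "--influxDatabase=beam_performance","--influxMeasurement=sql_bqio_read_java_batch"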

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 40.562 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings
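
Combining the flags suggested above into a single rerun of the failing task (a sketch, assuming invocation from the job's ws/src checkout with the Gradle wrapper):

> ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --stacktrace --warning-mode all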

BUILD FAILED in 5m 22s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/s47txd5eoqqtk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2587

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2587/display/redirect?page=changes>

Changes:

[noreply] Fixing BigQueryIO request too big corner case for streaming inserts


------------------------------------------
[...truncated 342.70 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 25, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 25, 2021 6:45:09 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 25, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 25, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
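
The root cause the message points at is a PCollection<Row> whose schema was never attached, so no coder can be inferred. A minimal standalone sketch of the fix the message itself suggests (all names illustrative; this is not the IT's code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        final Schema schema = Schema.builder()
            .addStringField("type")
            .addInt32Field("score")
            .build();

        PCollection<Row> rows =
            p.apply(Create.of("story", "job"))
                .apply(ParDo.of(new DoFn<String, Row>() {
                  @ProcessElement
                  public void processElement(@Element String type, OutputReceiver<Row> out) {
                    out.output(Row.withSchema(schema).addValues(type, 3).build());
                  }
                }))
                // Without this call, coder inference fails exactly as logged:
                // "Cannot provide a coder for a Beam Row. Please provide a
                //  schema instead using PCollection.setRowSchema."
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }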

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@311251360]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 6:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 25, 2021 6:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 25, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6462084005405693165.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-KEBooeD5rYr-VXEXmytgYWlbKmkfk_nE2PlhT7MP5v8.jar
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.14.1/85348a0c44c298bbec5ae747e67ae12e60b3aef6/libthrift-0.14.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/libthrift-0.14.1-WzUQ_nLm8HJeKc7269seq6zMxp15_E7VC2gWAKh2Z-w.jar
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/8.5.46/5d686394334d143f48251827435ab086a161e75e/tomcat-embed-core-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-embed-core-8.5.46-vl-FREjS7l1uADb-srT3ExYweaG2uaepdQjlWRetNcI.jar
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat/tomcat-annotations-api/8.5.46/56c67699de192c603afd6f029e80e5ff8d98e7e9/tomcat-annotations-api-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-annotations-api-8.5.46-amtG0OaVhkRRTAyjZYs7B-YSOmgqIO4203lSQnNfq8M.jar
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 246 files cached, 4 files newly uploaded in 0 seconds
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 5ac93a787833e8fd9b9274c677e2148489e2b601ec4c6f3c7d18f83f4a1e1325> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Wsk6eHgz6P2bknTGd-IUhInitgHsTG88fRj4P0oeEyU.pb
    Oct 25, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 25, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 25, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 25, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 25, 2021 6:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-24_23_45_21-3312190335979126018?project=apache-beam-testing
    Oct 25, 2021 6:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-24_23_45_21-3312190335979126018
    Oct 25, 2021 6:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-24_23_45_21-3312190335979126018
    Oct 25, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-25T06:45:24.558Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 25, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:30.887Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:31.671Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:31.709Z: Expanding GroupByKey operations into optimizable parts.
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:31.735Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:31.810Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:31.843Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:31.867Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:32.246Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:32.321Z: Starting 5 workers in us-central1-a...
    Oct 25, 2021 6:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:54.437Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 25, 2021 6:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:46:11.304Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 25, 2021 6:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:46:11.339Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 25, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:46:21.608Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 25, 2021 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:46:47.181Z: Workers have started successfully.
    Oct 25, 2021 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:46:47.209Z: Workers have started successfully.
    Oct 25, 2021 6:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:47:12.876Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 6:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:47:13.048Z: Cleaning up.
    Oct 25, 2021 6:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:47:13.139Z: Stopping worker pool...
    Oct 25, 2021 6:49:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:49:33.636Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 25, 2021 6:49:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:49:33.671Z: Worker pool stopped.
    Oct 25, 2021 6:49:39 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-24_23_45_21-3312190335979126018 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 97be82fe-6fc4-4361-8b87-c083f4e9da9e and timestamp: 2021-10-25T06:49:39.273000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      7.18

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 6:49:39 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 35.046 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 21s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/w7kyn4zosmpgs

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2586

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2586/display/redirect>

Changes:


------------------------------------------
[...truncated 338.40 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 25, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 25, 2021 12:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 25, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 25, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 25, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 25, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 25, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 25, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 25, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2595792217425081203.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Svu3U0YG4soNQ-jJoQ7YB5zJckxCzgPiwdj6X9eyS3s.jar
    Oct 25, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 25, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 25, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 1c18c0992bea98f286e19255f784d1ea7a16a2d5bc10e8125d5a6694a20a4c0a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-HBjAmSvqmPKG4ZJV94TR6noWotW8EOgSXVpmlKIKTAo.pb
    Oct 25, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 25, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 25, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 25, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 25, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-24_17_45_05-9734064161454404204?project=apache-beam-testing
    Oct 25, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-24_17_45_05-9734064161454404204
    Oct 25, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-24_17_45_05-9734064161454404204
    Oct 25, 2021 12:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-25T00:45:08.185Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:14.793Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.446Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.474Z: Expanding GroupByKey operations into optimizable parts.
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.508Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.589Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.617Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.650Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.928Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.986Z: Starting 5 workers in us-central1-a...
    Oct 25, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:28.630Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 25, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:51.127Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 25, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:51.168Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 25, 2021 12:46:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:46:01.393Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 25, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:46:26.125Z: Workers have started successfully.
    Oct 25, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:46:26.150Z: Workers have started successfully.
    Oct 25, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:46:53.588Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:46:53.748Z: Cleaning up.
    Oct 25, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:46:53.836Z: Stopping worker pool...
    Oct 25, 2021 12:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:49:21.277Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 25, 2021 12:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:49:21.318Z: Worker pool stopped.
    Oct 25, 2021 12:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-24_17_45_05-9734064161454404204 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 600a77f3-8d89-4571-bb36-20a416520534 and timestamp: 2021-10-25T00:49:26.572000000Z:
                     Metric:                    Value:
                   read_time                     6.658
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 12:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 38.103 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/mfpxpljom2t3g

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2585

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2585/display/redirect>

Changes:


------------------------------------------
[...truncated 338.46 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
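
The -DbeamTestPipelineOptions JSON array in the command above is how the Gradle task hands pipeline options to the test JVM. Inside the test they are typically recovered along these lines (a sketch assuming the standard Beam test utilities):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    class OptionsSketch {
      static PipelineOptions options() {
        // Parses the beamTestPipelineOptions system property set on the test JVM.
        return TestPipeline.testingPipelineOptions();
      }
    }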

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 24, 2021 6:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 24, 2021 6:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 24, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 24, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
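
    The exception above is the recurring root cause in this job: the test's ParDo(RowMonitor) emits Beam Rows, and a PCollection<Row> has no default coder until a schema is attached. A minimal sketch of the fix the message itself points at, via PCollection.setRowSchema (field names and types here are assumptions inferred from the query, not code from the Beam repo):

        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        class RowSchemaFix {
          // 'rows' stands in for the monitored PCollection<Row> the test builds.
          static PCollection<Row> attachSchema(PCollection<Row> rows) {
            Schema schema = Schema.builder()
                .addNullableField("by", Schema.FieldType.STRING)    // projected as 'author'
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)  // assumed numeric type
                .build();
            // Equivalent to rows.setCoder(RowCoder.of(schema)).
            return rows.setRowSchema(schema);
          }
        }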

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@311251360]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
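
    For contrast with the BeamCalcRel plans in the failing tests (where the filter and projection would run inside Beam after reading every column), the push-down plan hands both to the BigQuery Storage API. Expressed directly against BigQueryIO, the read amounts to roughly this sketch; the source table name is an assumption, since the IT drives this through Beam SQL rather than calling BigQueryIO itself:

        import com.google.api.services.bigquery.model.TableRow;
        import java.util.Arrays;
        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.values.PCollection;

        class PushDownSketch {
          static PCollection<TableRow> read(Pipeline pipeline) {
            return pipeline.apply(
                BigQueryIO.readTableRows()
                    .from("bigquery-public-data:hacker_news.full")  // assumed source table
                    .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                    // Only the used fields are read from storage...
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    // ...and the supported filter is evaluated storage-side.
                    .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
          }
        }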
    Oct 24, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 24, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 24, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 24, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2281831264933747915.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-BFpAUBKkNYZEkFjs7iKxv0Rjtt-SZ_ChDCD2muD7Gbc.jar
    Oct 24, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 24, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 24, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash 46658f6a160e60bde29ee79ef6ba17502692c31d5eab264279671f06a43e72ec> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-RmWPahYOYL3inuee9roXUCaSwx1eqyZCeWcfBqQ-cuw.pb
    Oct 24, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 24, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 24, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 24, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 24, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-24_11_45_03-14601734037435446?project=apache-beam-testing
    Oct 24, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-24_11_45_03-14601734037435446
    Oct 24, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-24_11_45_03-14601734037435446
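
    (The same 'gcloud dataflow jobs' command group can be used to poll rather than cancel, e.g. 'gcloud dataflow jobs describe 2021-10-24_11_45_03-14601734037435446 --project=apache-beam-testing --region=us-central1', assuming a standard gcloud installation.)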
    Oct 24, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-24T18:45:06.736Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 24, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:12.335Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 24, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.106Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 24, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.144Z: Expanding GroupByKey operations into optimizable parts.
    Oct 24, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.161Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 24, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.221Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 24, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.249Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 24, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.281Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 24, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.592Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.670Z: Starting 5 workers in us-central1-a...
    Oct 24, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:45.511Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 24, 2021 6:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:58.191Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 24, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:46:27.415Z: Workers have started successfully.
    Oct 24, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:46:27.448Z: Workers have started successfully.
    Oct 24, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:46:57.385Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:46:57.528Z: Cleaning up.
    Oct 24, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:46:57.596Z: Stopping worker pool...
    Oct 24, 2021 6:49:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:49:22.080Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 24, 2021 6:49:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:49:22.142Z: Worker pool stopped.
    Oct 24, 2021 6:49:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-24_11_45_03-14601734037435446 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 72e4438f-eb5b-436f-9755-825bad0fccdc and timestamp: 2021-10-24T18:49:27.274000000Z:
                     Metric:                    Value:
                   read_time                     8.245
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 6:49:27 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 40.39 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ojtl77yvwijri

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2584

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2584/display/redirect>

Changes:


------------------------------------------
[...truncated 338.34 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 24, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 24, 2021 12:44:58 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 24, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 24, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 24, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 24, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 24, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 24, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 24, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3220223612139521038.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-SxyqRG-T0_8658xNXHO5x0m0-2Y2aE5IySXuTWRZd1U.jar
    Oct 24, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 24, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 24, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 5c74b9531ad0d39b7fba4e188553428969bcd09b06af65e77b9677878684d51c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-XHS5UxrQ05t_uk4YhVNCiWm80JsGr2Xne5Z3h4aE1Rw.pb
    Oct 24, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 24, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 24, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 24, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 24, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-24_05_45_11-198687276119188551?project=apache-beam-testing
    Oct 24, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-24_05_45_11-198687276119188551
    Oct 24, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-24_05_45_11-198687276119188551
    Oct 24, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-24T12:45:14.601Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:19.011Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:19.807Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:19.846Z: Expanding GroupByKey operations into optimizable parts.
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:19.862Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:19.933Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:19.953Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:19.987Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:20.307Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:20.367Z: Starting 5 workers in us-central1-a...
    Oct 24, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:37.590Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 24, 2021 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:46:04.308Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 24, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:46:31.912Z: Workers have started successfully.
    Oct 24, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:46:31.931Z: Workers have started successfully.
    Oct 24, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:47:00.477Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:47:00.626Z: Cleaning up.
    Oct 24, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:47:00.698Z: Stopping worker pool...
    Oct 24, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:49:20.814Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 24, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:49:20.876Z: Worker pool stopped.
    Oct 24, 2021 12:49:25 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-24_05_45_11-198687276119188551 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0ea1fa4c-570f-4d4e-b2db-9bbe7caea725 and timestamp: 2021-10-24T12:49:25.876000000Z:
                     Metric:                    Value:
                   read_time                     7.363
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 12:49:26 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 32.174 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/52tcykkweyrcq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2583

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2583/display/redirect>

Changes:


------------------------------------------
[...truncated 336.90 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 24, 2021 6:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 24, 2021 6:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 24, 2021 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 24, 2021 6:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 24, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
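
For context, the push-down shown in the BEAMPlan above is only available when the table is registered with the BigQuery Storage Read API (method DIRECT_READ). A sketch of the kind of declaration involved, with illustrative schema and location (the IT builds its table programmatically):

    import org.apache.beam.sdk.extensions.sql.impl.BeamSqlEnv;
    import org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTableProvider;

    class PushDownTableExample {
      static BeamSqlEnv tableWithPushDown() {
        // method=DIRECT_READ is what lets the planner emit BeamPushDownIOSourceRel,
        // pushing usedFields and the supported filter into the BigQuery read itself
        // instead of evaluating them in a BeamCalcRel after the source.
        BeamSqlEnv sqlEnv = BeamSqlEnv.inMemory(new BigQueryTableProvider());
        sqlEnv.executeDdl(
            "CREATE EXTERNAL TABLE HACKER_NEWS (`by` VARCHAR, `type` VARCHAR, title VARCHAR, score BIGINT) "
                + "TYPE bigquery "
                + "LOCATION 'apache-beam-testing:hacker_news.sample' " // illustrative location
                + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'");
        return sqlEnv;
      }
    }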
    Oct 24, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 24, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 24, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 24, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test986201900298641337.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-OV5g-cowxO6zfmtIngBxzK3mHZp1HkEiHVK-HHn-m6g.jar
    Oct 24, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 24, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 24, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash a7e8d583e26229f5c31f178f1bfd3ad9b49b7da85d7b4842a8eaea89cf669895> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-p-jVg-JiKfXDHxePG_062bSbfahde0hCqOrqic9mmJU.pb
    Oct 24, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 24, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 24, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 24, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 24, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-23_23_45_02-13720345690466915280?project=apache-beam-testing
    Oct 24, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-23_23_45_02-13720345690466915280
    Oct 24, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-23_23_45_02-13720345690466915280
    Oct 24, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-24T06:45:06.087Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:11.007Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:11.851Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:11.912Z: Expanding GroupByKey operations into optimizable parts.
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:11.945Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:12.004Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:12.030Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:12.054Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:12.370Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:12.433Z: Starting 5 workers in us-central1-a...
    Oct 24, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:23.092Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 24, 2021 6:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:58.694Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 24, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:46:25.353Z: Workers have started successfully.
    Oct 24, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:46:25.381Z: Workers have started successfully.
    Oct 24, 2021 6:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:46:52.979Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 6:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:46:53.109Z: Cleaning up.
    Oct 24, 2021 6:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:46:53.175Z: Stopping worker pool...
    Oct 24, 2021 6:49:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:49:07.374Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 24, 2021 6:49:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:49:07.415Z: Worker pool stopped.
    Oct 24, 2021 6:49:15 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-23_23_45_02-13720345690466915280 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f4e5badd-2a26-4279-ac69-b2de80cd8a86 and timestamp: 2021-10-24T06:49:15.304000000Z:
                     Metric:                    Value:
                   read_time                     8.898
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 6:49:15 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
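
This warning is incidental to the test failures above: the InfluxDB publisher simply skips publishing when its settings are absent from the test pipeline options. Assuming the option names used by Beam's test utilities (treat these as illustrative, not verified against this job's configuration), publishing would be enabled by flags such as:

    --influxDatabase=beam_test_metrics \
    --influxMeasurement=sql_bqio_read_java_batch \
    --influxHost=http://localhost:8086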

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 28.931 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org
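
To reproduce only the failing tests locally (assuming a Beam checkout with the same Gradle wrapper), the standard Gradle test filter applies:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest \
        --tests org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT \
        --stacktrace

Note the IT also expects -DbeamTestPipelineOptions pointing at a usable GCP project, as shown in the "Starting process 'Gradle Test Executor 1'" lines of this log.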

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 58s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/t7eiql7zcystu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2582

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2582/display/redirect>

Changes:


------------------------------------------
[...truncated 338.52 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 24, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 24, 2021 12:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 24, 2021 12:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 24, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 24, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 24, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 24, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 24, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5620400807474782905.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-PvVfJH29nzZezX6DdAD2fei6EHEtmTyWx_r0EaBp4Go.jar
    Oct 24, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 24, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 24, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash eb265cd519ab6482788bd1b185f3d113b78efd4154495af0b3e70f392733a1fb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-6yZc1RmrZIJ4i9GxhfPRE7eO_UFUSVrws-cPOSczofs.pb
    Oct 24, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 24, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 24, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 24, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 24, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-23_17_45_05-11425394296655645248?project=apache-beam-testing
    Oct 24, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-23_17_45_05-11425394296655645248
    Oct 24, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-23_17_45_05-11425394296655645248
    Oct 24, 2021 12:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-24T00:45:09.077Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 24, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:16.459Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 24, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.136Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 24, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.174Z: Expanding GroupByKey operations into optimizable parts.
    Oct 24, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.200Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 24, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.272Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 24, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.290Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 24, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.338Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 24, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.663Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.745Z: Starting 5 workers in us-central1-c...
    Oct 24, 2021 12:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:42.088Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 24, 2021 12:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:46:02.194Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 24, 2021 12:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:46:27.494Z: Workers have started successfully.
    Oct 24, 2021 12:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:46:27.517Z: Workers have started successfully.
    Oct 24, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:46:56.310Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:46:56.473Z: Cleaning up.
    Oct 24, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:46:56.538Z: Stopping worker pool...
    Oct 24, 2021 12:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:49:19.776Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 24, 2021 12:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:49:19.816Z: Worker pool stopped.
    Oct 24, 2021 12:49:25 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-23_17_45_05-11425394296655645248 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9b86f15a-194a-423a-ad72-a17e2abd1bfc and timestamp: 2021-10-24T00:49:25.589000000Z:
                     Metric:                    Value:
                   read_time                     7.565
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 12:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 36.703 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 6s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/5tctckamenm2y

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2581

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2581/display/redirect>

Changes:


------------------------------------------
[...truncated 338.44 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 23, 2021 6:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 23, 2021 6:44:50 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 23, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 23, 2021 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 23, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 23, 2021 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 23, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 23, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test916840746478523694.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-G8FrI5xw5d-pHtGb7JunTIPm5peWp0lb7KMauDTCdJQ.jar
    Oct 23, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 23, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 23, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash 3f0c787d54a4d91ba4485b8b3cd5972d51a4a906dc520d9bd65ab29339b06bcc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Pwx4fVSk2RukSFuLPNWXLVGkqQbcUg2b1lqykzmwa8w.pb
    Oct 23, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 23, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 23, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 23, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 23, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-23_11_45_03-11784254790895893403?project=apache-beam-testing
    Oct 23, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-23_11_45_03-11784254790895893403
    Oct 23, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-23_11_45_03-11784254790895893403
    Oct 23, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-23T18:45:06.342Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 23, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:11.307Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 23, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:12.250Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 23, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:12.380Z: Expanding GroupByKey operations into optimizable parts.
    Oct 23, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:12.440Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 23, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:12.499Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 23, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:12.526Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 23, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:12.569Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 23, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:12.940Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:13.013Z: Starting 5 workers in us-central1-a...
    Oct 23, 2021 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:30.945Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 23, 2021 6:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:59.011Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 23, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:46:23.398Z: Workers have started successfully.
    Oct 23, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:46:23.431Z: Workers have started successfully.
    Oct 23, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:46:51.319Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:46:51.454Z: Cleaning up.
    Oct 23, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:46:51.528Z: Stopping worker pool...
    Oct 23, 2021 6:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:49:27.362Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 23, 2021 6:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:49:27.405Z: Worker pool stopped.
    Oct 23, 2021 6:49:33 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-23_11_45_03-11784254790895893403 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 659d2668-a9fb-4554-9f4a-4cf586c89f8c and timestamp: 2021-10-23T18:49:33.337000000Z:
                     Metric:                    Value:
                   read_time                     7.448
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 6:49:33 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
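
The warning above means this run's read_time and fields_read metrics were only printed to stdout, not persisted. A minimal sketch of how a publisher could be configured through org.apache.beam.sdk.testutils.publishing.InfluxDBSettings; the host and database values below are assumptions, not taken from this job (the measurement reuses the --metricsBigQueryTable value from the test options):

    // Sketch only; endpoint and database names are placeholders, not this
    // job's configuration.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")           // assumed endpoint
            .withDatabase("beam_test_metrics")           // assumed database
            .withMeasurement("sql_bqio_read_java_batch") // from the test options
            .get();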

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 47.484 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 16s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/taemuhzyuhkx2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2580

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2580/display/redirect>

Changes:


------------------------------------------
[...truncated 337.45 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 23, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
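
The deprecation warning above is a straight option rename: in the -DbeamTestPipelineOptions list shown at process start, only this flag would change, e.g.

    "--workerHarnessContainerImage=" -> "--sdkContainerImage="
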
    Oct 23, 2021 12:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 23, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 23, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

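The IllegalStateException above is the failure the message itself diagnoses: the PCollection<Row> produced for BeamIOSourceRel_4 carries no schema, so no RowCoder can be inferred when the pipeline is finalized. A minimal sketch of the remedy the error text suggests, attaching a row schema; the field names come from the query above, while the field types are assumed rather than read from the table definition:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Sketch, not the IT's actual fix: attach a schema so Beam can infer a
    // RowCoder for the Row output, as the error message recommends.
    class RowSchemaFix {
      static PCollection<Row> withRowSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64) // type assumed
                .build();
        return rows.setRowSchema(schema);
      }
    }
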
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 23, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
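
The plan above shows both column pruning (usedFields) and predicate pushdown reaching the source, which is why only four fields and pre-filtered rows leave BigQuery. As an illustration of what that amounts to at the storage layer (not Beam's internal code), roughly equivalent read options in the com.google.cloud.bigquery.storage.v1 client:

    import com.google.cloud.bigquery.storage.v1.ReadSession;

    // Illustrative sketch: the pruned columns and pushed-down filter expressed
    // as Storage Read API options, so only matching rows/columns are scanned.
    ReadSession.TableReadOptions readOptions =
        ReadSession.TableReadOptions.newBuilder()
            .addSelectedFields("by")
            .addSelectedFields("type")
            .addSelectedFields("title")
            .addSelectedFields("score")
            .setRowRestriction("(type = 'story' OR type = 'job') AND score > 2")
            .build();
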
    Oct 23, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 23, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 23, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 23, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4153187230473989759.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-aoauSMOjrgz76uA0qyPTW3eGuShWn-6wamd8LclOpBs.jar
    Oct 23, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 23, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 23, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash a1b414900d27cf86a14755f0bd70bf55819b25d4c7909d375e8d335496246469> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-obQUkA0nz4ahR1XwvXC_VYGbJdTHkJ03Xo0zVJYkZGk.pb
    Oct 23, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 23, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 23, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 23, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 23, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-23_05_45_05-4998188572001341519?project=apache-beam-testing
    Oct 23, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-23_05_45_05-4998188572001341519
    Oct 23, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-23_05_45_05-4998188572001341519
    Oct 23, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-23T12:45:10.596Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:18.382Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.164Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.203Z: Expanding GroupByKey operations into optimizable parts.
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.230Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.296Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.323Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.361Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.685Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.763Z: Starting 5 workers in us-central1-c...
    Oct 23, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:26.559Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 23, 2021 12:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:58.534Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 23, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:46:23.731Z: Workers have started successfully.
    Oct 23, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:46:23.765Z: Workers have started successfully.
    Oct 23, 2021 12:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:46:53.507Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 12:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:46:53.802Z: Cleaning up.
    Oct 23, 2021 12:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:46:53.953Z: Stopping worker pool...
    Oct 23, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:49:16.570Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 23, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:49:16.614Z: Worker pool stopped.
    Oct 23, 2021 12:49:23 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-23_05_45_05-4998188572001341519 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 573b20a4-b8f5-4b0b-bacd-3f87a72c7d3f and timestamp: 2021-10-23T12:49:23.171000000Z:
                     Metric:                    Value:
                   read_time                      9.04
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 12:49:23 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 36.406 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/uoybre6u24wru

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2579

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2579/display/redirect>

Changes:


------------------------------------------
[...truncated 339.39 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 23, 2021 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 23, 2021 6:44:54 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 23, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 23, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 23, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 23, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 23, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 23, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 23, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1581369464775801829.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-M02kErQJir48ULTVIVE-RVq-luRtbY5wAFMczOrUSrc.jar
    Oct 23, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 23, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 23, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 0a0e2ff3f2cf81ec8d87d703247efad65dde481d4bcddc4882fdeacb31fe696b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Cg4v8_LPgeyNh9cDJH761l3eSB1LzdxIgv3qyzH-aWs.pb
    Oct 23, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 23, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 23, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 23, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 23, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-22_23_45_07-6553272891410165921?project=apache-beam-testing
    Oct 23, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-22_23_45_07-6553272891410165921
    Oct 23, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-22_23_45_07-6553272891410165921
    Oct 23, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-23T06:45:10.401Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 23, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:16.222Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 23, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:16.999Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 23, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:17.026Z: Expanding GroupByKey operations into optimizable parts.
    Oct 23, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:17.068Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 23, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:17.138Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 23, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:17.164Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 23, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:17.197Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 23, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:17.526Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:17.581Z: Starting 5 workers in us-central1-c...
    Oct 23, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:39.982Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 23, 2021 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:57.381Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 23, 2021 6:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:46:22.025Z: Workers have started successfully.
    Oct 23, 2021 6:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:46:22.055Z: Workers have started successfully.
    Oct 23, 2021 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:46:59.111Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:46:59.271Z: Cleaning up.
    Oct 23, 2021 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:46:59.350Z: Stopping worker pool...
    Oct 23, 2021 6:49:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:49:10.457Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 23, 2021 6:49:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:49:10.497Z: Worker pool stopped.
    Oct 23, 2021 6:49:18 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-22_23_45_07-6553272891410165921 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): eec3c578-b3c4-4a82-bf4d-b995c9b0a757 and timestamp: 2021-10-23T06:49:18.200000000Z:
                     Metric:                    Value:
                   read_time                    14.831
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 6:49:18 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 28.471 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 59s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/xh4teytgmnutu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2578

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2578/display/redirect>

Changes:


------------------------------------------
[...truncated 347.17 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 23, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 23, 2021 12:45:23 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 23, 2021 12:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 23, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 12:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 12:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
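
    The exception is self-describing: a PCollection of Beam Rows cannot have its
    coder inferred, so a schema (or an explicit coder) has to be attached before
    the pipeline is finalized. A minimal, self-contained sketch of the
    PCollection.setRowSchema route the message suggests (field names taken from
    the query above; the types are assumptions for illustration, not the IT's
    actual code):

        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.options.PipelineOptionsFactory;
        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.transforms.Create;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        public class RowSchemaSketch {
          public static void main(String[] args) {
            Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

            // Schema for the four projected columns (types are assumptions).
            Schema schema = Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

            Row row = Row.withSchema(schema)
                .addValues("someone", "story", "Example title", 3L)
                .build();

            // Attaching the schema here is what prevents the
            // IllegalStateException above: coder inference never runs because
            // the Row coder is derived from the schema.
            PCollection<Row> rows = p.apply(Create.of(row).withRowSchema(schema));
            // Equivalent after the fact: rows.setRowSchema(schema);

            p.run().waitUntilFinish();
          }
        }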

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 23, 2021 12:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 12:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
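
    For the push-down test the planner compiles the projection and the filter
    straight into the storage-layer read, which is the same thing BigQueryIO
    exposes as selected fields plus a row restriction. A hedged sketch of what
    the BeamPushDownIOSourceRel above amounts to when written directly against
    BigQueryIO (the filter string is copied from the log line above; the table
    name is an assumption, and this is not the IT's actual code):

        import java.util.Arrays;
        import com.google.api.services.bigquery.model.TableRow;
        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
        import org.apache.beam.sdk.options.PipelineOptionsFactory;
        import org.apache.beam.sdk.values.PCollection;

        public class PushDownSketch {
          public static void main(String[] args) {
            Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

            // DIRECT_READ lets the BigQuery Storage API apply the column
            // projection (usedFields) and filter (BigQueryFilter) server-side,
            // mirroring the plan printed above.
            PCollection<TableRow> rows = p.apply(
                BigQueryIO.readTableRows()
                    .from("bigquery-public-data:hacker_news.full") // assumed table
                    .withMethod(Method.DIRECT_READ)
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));

            p.run().waitUntilFinish();
          }
        }
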
    Oct 23, 2021 12:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 23, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 23, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 23, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8172644966567795379.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-5SKfHA7pbWfH_cncwRkL6M8c20m2p4DP1fGE9JcmAzg.jar
    Oct 23, 2021 12:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 23, 2021 12:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 23, 2021 12:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash d92f24c97d541dcabcf2cc743fad5d70150cf6cb31f6b09a906823670a00d766> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-2S8kyX1UHcq88sx0P61dcBUM9ssx9rCakGgjZwoA12Y.pb
    Oct 23, 2021 12:45:36 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 23, 2021 12:45:36 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 23, 2021 12:45:36 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 23, 2021 12:45:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 23, 2021 12:45:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-22_17_45_37-15579832384745414300?project=apache-beam-testing
    Oct 23, 2021 12:45:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-22_17_45_37-15579832384745414300
    Oct 23, 2021 12:45:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-22_17_45_37-15579832384745414300
    Oct 23, 2021 12:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-23T00:45:40.148Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:47.695Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.359Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.401Z: Expanding GroupByKey operations into optimizable parts.
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.430Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.505Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.529Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.550Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.857Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.938Z: Starting 5 workers in us-central1-c...
    Oct 23, 2021 12:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:46:16.620Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 23, 2021 12:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:46:29.690Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 23, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:46:55.074Z: Workers have started successfully.
    Oct 23, 2021 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:46:55.177Z: Workers have started successfully.
    Oct 23, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:47:26.595Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:47:26.743Z: Cleaning up.
    Oct 23, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:47:26.810Z: Stopping worker pool...
    Oct 23, 2021 12:49:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:49:52.198Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 23, 2021 12:49:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:49:52.253Z: Worker pool stopped.
    Oct 23, 2021 12:49:57 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-22_17_45_37-15579832384745414300 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 98e29aa3-13bd-47cb-a8ca-0fced4b93c6a and timestamp: 2021-10-23T00:49:57.976000000Z:
                     Metric:                    Value:
                   read_time                     9.382
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 12:49:59 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
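
    The InfluxDB publisher skips publishing because the measurement and database
    properties were never set. If publishing were intended, the
    -DbeamTestPipelineOptions array at the top of this task would need the
    corresponding options; the names below follow the pattern used by Beam's
    other perf-test Jenkins jobs and should be read as assumptions, with the
    host left as a placeholder:

        "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=<your InfluxDB endpoint>"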

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 40.931 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 38s
152 actionable tasks: 102 executed, 50 from cache

Publishing build scan...
https://gradle.com/s/7kndjw47yqcam

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2577

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2577/display/redirect>

Changes:


------------------------------------------
[...truncated 337.49 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 22, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 22, 2021 6:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 22, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 22, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 22, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 22, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 22, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 22, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 22, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4812659937698288179.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-HK_O8Kfv2E-xp0m8VU2_UJcCq6Xu1FjNajhvssOahtY.jar
    Oct 22, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 22, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 22, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash b1881ce480f204564b834418301c9652b6b8285691b48247a1a5efd3fff0e5a1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-sYgc5IDyBFZLg0QYMByWUra4KFaRtIJHoaXv0__w5aE.pb
    Oct 22, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 22, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 22, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 22, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 22, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-22_11_45_06-6418125828332159712?project=apache-beam-testing
    Oct 22, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-22_11_45_06-6418125828332159712
    Oct 22, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-22_11_45_06-6418125828332159712
    Oct 22, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-22T18:45:10.314Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:15.476Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.232Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.269Z: Expanding GroupByKey operations into optimizable parts.
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.299Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.352Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.367Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.416Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.729Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.809Z: Starting 5 workers in us-central1-a...
    Oct 22, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:32.160Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 22, 2021 6:45:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:47.748Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 22, 2021 6:45:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:47.774Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 22, 2021 6:45:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:58.086Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 22, 2021 6:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:46:22.931Z: Workers have started successfully.
    Oct 22, 2021 6:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:46:22.954Z: Workers have started successfully.
    Oct 22, 2021 6:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:46:50.248Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 6:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:46:50.371Z: Cleaning up.
    Oct 22, 2021 6:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:46:50.450Z: Stopping worker pool...
    Oct 22, 2021 6:49:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:49:08.129Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 22, 2021 6:49:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:49:08.166Z: Worker pool stopped.
    Oct 22, 2021 6:49:15 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-22_11_45_06-6418125828332159712 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4c7fec99-b8ac-4e9e-a8a3-0c123523220f and timestamp: 2021-10-22T18:49:15.341000000Z:
                     Metric:                    Value:
                   read_time                      7.87
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 6:49:15 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 26.074 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 56s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/kpieh6wjlnw4u

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2576

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2576/display/redirect>

Changes:


------------------------------------------
[...truncated 337.48 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 22, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 22, 2021 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 22, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 22, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 22, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 22, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 22, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 22, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6513860883471970675.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-9wlE9F-L4j_o6_mGA0EzSy3tsPFuagq285rta2TZlUg.jar
    Oct 22, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 22, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 22, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 1e8ee19e98beceb002660fe15f08f010f3a28f9e31b126e08fcd569004082861> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Ho7hnpi-zrACZg_hXwjwEPOij54xsSbgj81WkAQIKGE.pb
    Oct 22, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-22_05_45_09-5025220313817736867?project=apache-beam-testing
    Oct 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-22_05_45_09-5025220313817736867
    Oct 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-22_05_45_09-5025220313817736867
    Oct 22, 2021 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-22T12:45:12.402Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 22, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:17.223Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.129Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.159Z: Expanding GroupByKey operations into optimizable parts.
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.186Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.259Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.287Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.310Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.621Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.697Z: Starting 5 workers in us-central1-a...
    Oct 22, 2021 12:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:43.231Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 22, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:52.955Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 22, 2021 12:45:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:52.981Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 22, 2021 12:46:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:46:03.242Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 22, 2021 12:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:46:27.608Z: Workers have started successfully.
    Oct 22, 2021 12:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:46:27.646Z: Workers have started successfully.
    Oct 22, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:46:55.999Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:46:56.156Z: Cleaning up.
    Oct 22, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:46:56.230Z: Stopping worker pool...
    Oct 22, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:49:16.111Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 22, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:49:16.188Z: Worker pool stopped.
    Oct 22, 2021 12:49:22 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-22_05_45_09-5025220313817736867 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 85f023b6-3ca7-48b7-a29a-efabb9941fe6 and timestamp: 2021-10-22T12:49:22.923000000Z:
                     Metric:                    Value:
                   read_time                     8.509
                 fields_read                 4375276.0
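
The BEAMPlan above is the interesting part of this run: the projection (usedFields) and the entire WHERE clause (BigQueryFilter with an empty unsupported{} set) are absorbed into BeamPushDownIOSourceRel instead of a separate BeamCalcRel, so only four columns are read. As a point of reference, here is a minimal, self-contained sketch of pushing the same query shape through Beam SQL. It runs over an in-memory PCollection rather than the IT's BigQuery table provider (which is what method=DIRECT_READ push-down actually requires), and the schema and sample values are assumptions, not taken from the test table.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQueryShape {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // Field names follow the logged query; the types are assumptions.
        Schema schema = Schema.builder()
            .addStringField("by").addStringField("type")
            .addStringField("title").addInt64Field("score").build();
        PCollection<Row> input = p.apply(Create.of(
            Row.withSchema(schema).addValues("a", "story", "t", 3L).build())
            .withRowSchema(schema));
        // Same projection and predicate as the plan above; `by` is a Calcite
        // reserved word, hence the backticks.
        input.apply(SqlTransform.query(
            "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }

Run against the PCOLLECTION source the predicate stays in a Calc node; only a table provider that implements project/filter push-down, like the BigQuery one above, folds it into the read as BeamPushDownIOSourceRel.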

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 12:49:23 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
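
This warning, not the Dataflow job, is why no numbers land in InfluxDB: InfluxDBPublisher skips publishing when its settings are incomplete. Presumably the fix is to extend the -DbeamTestPipelineOptions array shown near the top of the task output with the InfluxDB settings, along these lines (option names as used elsewhere in Beam's test utilities; the values are placeholders, so treat the whole snippet as an assumption):

    "--influxMeasurement=sql_bqio_read_java_batch",
    "--influxDatabase=beam_test_metrics",
    "--influxHost=http://localhost:8086"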

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 31.596 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 3s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/siwynfk4pt3xm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2575

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2575/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13015] Create a multiplexer that sends Elements based upon


------------------------------------------
[...truncated 347.05 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 22, 2021 6:47:11 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 22, 2021 6:47:12 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 22, 2021 6:47:12 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 22, 2021 6:47:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:47:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:47:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 6:47:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:47:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:47:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 6:47:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
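
This failure (and the identical readUsingDefaultMethod one below) is a coder problem, not a BigQuery one: BeamSqlRelUtils.toPCollection yields a PCollection<Row>, and a Row needs a schema before Beam can infer a coder for it. A minimal sketch of the two remedies the exception itself names follows; the schema fields mirror the query's SELECT list, but the field types and the Create input are assumptions standing in for the SQL output.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaRemedy {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // Field names follow the query's projection; types are assumed.
        Schema schema = Schema.builder()
            .addStringField("author").addStringField("type")
            .addStringField("title").addInt64Field("score").build();
        PCollection<Row> rows = p.apply(Create.of(
            Row.withSchema(schema).addValues("a", "story", "t", 3L).build())
            .withRowSchema(schema));
        // Either line below satisfies coder inference and avoids the
        // "Unable to return a default Coder" IllegalStateException:
        rows.setRowSchema(schema);               // preferred for Row elements
        // rows.setCoder(RowCoder.of(schema));   // explicit-coder equivalent
        p.run().waitUntilFinish();
      }
    }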

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 6:47:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 22, 2021 6:47:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 22, 2021 6:47:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 22, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 22, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 22, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2704467873999806469.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-mKkyCuTrHG9wh9sVz5PggKRxtodra9D3IgEUIGZzVtM.jar
    Oct 22, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 22, 2021 6:47:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 22, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 34bc0c8c9f4388c52ab475e7766b4710c8aa3abd82c64b5a2a8332c0271f07eb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-NLwMjJ9DiMUqtHXndmtHEMiqOr2CxktaKoMywCcfB-s.pb
    Oct 22, 2021 6:47:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 22, 2021 6:47:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 22, 2021 6:47:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 22, 2021 6:47:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 22, 2021 6:47:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-21_23_47_28-9679235479846533736?project=apache-beam-testing
    Oct 22, 2021 6:47:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-21_23_47_28-9679235479846533736
    Oct 22, 2021 6:47:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-21_23_47_28-9679235479846533736
    Oct 22, 2021 6:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-22T06:47:31.466Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:36.666Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.354Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.383Z: Expanding GroupByKey operations into optimizable parts.
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.422Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.474Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.500Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.526Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.845Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.913Z: Starting 5 workers in us-central1-a...
    Oct 22, 2021 6:47:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:45.411Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 22, 2021 6:48:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:48:18.726Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 22, 2021 6:48:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:48:43.486Z: Workers have started successfully.
    Oct 22, 2021 6:48:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:48:43.511Z: Workers have started successfully.
    Oct 22, 2021 6:49:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:49:10.552Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 6:49:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:49:10.691Z: Cleaning up.
    Oct 22, 2021 6:49:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:49:10.754Z: Stopping worker pool...
    Oct 22, 2021 6:51:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:51:43.519Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 22, 2021 6:51:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:51:43.558Z: Worker pool stopped.
    Oct 22, 2021 6:51:50 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-21_23_47_28-9679235479846533736 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b7b4b9d2-734b-4fe4-8daf-1f122bd0e0a0 and timestamp: 2021-10-22T06:51:50.197000000Z:
                     Metric:                    Value:
                   read_time                     9.081
                 fields_read                 4375276.0
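
For reference, the planner's usedFields/BigQueryFilter push-down corresponds, at the IO level, to a BigQuery Storage API read with column projection and a row restriction. A hand-written equivalent might look like the sketch below; the table reference is a placeholder, not the IT's actual HACKER_NEWS test table.

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // Column projection and row restriction are only honored by the
        // Storage API path, i.e. Method.DIRECT_READ.
        p.apply(BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")  // placeholder table
            .withMethod(Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction(
                "(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }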

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 6:51:50 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 43.255 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 32s
152 actionable tasks: 103 executed, 49 from cache

Publishing build scan...
https://gradle.com/s/ot5c4djvxs4qa

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2574

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2574/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13096] Double test timeout. (#15774)

[noreply] [BEAM-13019] Add `containsInAnyOrder` with matchers to the

[Daniel Oliveira] Avoiding read-only Go module cache in Gradle config.

[noreply] [BEAM-11758] Update basics page: Pipeline, PCollection, PTransform

[noreply] Test SetState addIfAbsent with no read (#15776)

[noreply] lazy creation of source splits for export-based ReadFromBigQuery

[noreply] [BEAM-11275] Support remote package download from remote filesystems in


------------------------------------------
[...truncated 351.17 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 4 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 599dc4d4ff281af8baf69fcffab042b0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 22, 2021 12:46:12 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 22, 2021 12:46:13 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 22, 2021 12:46:13 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 22, 2021 12:46:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:46:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:46:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 12:46:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:46:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:46:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 12:46:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 22, 2021 12:46:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 22, 2021 12:46:22 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 22, 2021 12:46:22 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 22, 2021 12:46:22 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3460714990117420804.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ODC0xluqdab86JEn1B721eorFDaytoXALNi2C6k4G18.jar
    Oct 22, 2021 12:46:23 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 22, 2021 12:46:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 22, 2021 12:46:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash f6f94733fa2e5056c93585100b93bd7f5d8cff4613ed7f3f07dca88b5a061231> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-9vlHM_ouUFbJNYUQC5O9f12M_0YT7X8_B9yoi1oGEjE.pb
    Oct 22, 2021 12:46:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 22, 2021 12:46:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 22, 2021 12:46:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 22, 2021 12:46:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 22, 2021 12:46:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-21_17_46_26-4390393042301835280?project=apache-beam-testing
    Oct 22, 2021 12:46:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-21_17_46_26-4390393042301835280
    Oct 22, 2021 12:46:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-21_17_46_26-4390393042301835280
    Oct 22, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-22T00:46:29.325Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 22, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:34.487Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.134Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.177Z: Expanding GroupByKey operations into optimizable parts.
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.213Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.267Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.293Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.326Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.814Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.883Z: Starting 5 workers in us-central1-a...
    Oct 22, 2021 12:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:47:06.842Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 22, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:47:21.841Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 22, 2021 12:47:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:47:46.820Z: Workers have started successfully.
    Oct 22, 2021 12:47:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:47:46.855Z: Workers have started successfully.
    Oct 22, 2021 12:48:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:48:12.430Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 12:48:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:48:12.614Z: Cleaning up.
    Oct 22, 2021 12:48:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:48:12.688Z: Stopping worker pool...
    Oct 22, 2021 12:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:50:34.470Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 22, 2021 12:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:50:34.512Z: Worker pool stopped.
    Oct 22, 2021 12:50:40 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-21_17_46_26-4390393042301835280 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 676c8fd1-bd33-4cde-ae5a-c7858b22578f and timestamp: 2021-10-22T00:50:40.154000000Z:
                     Metric:                    Value:
                   read_time                     6.588
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 12:50:40 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 31.934 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 22s
152 actionable tasks: 104 executed, 48 from cache

Publishing build scan...
https://gradle.com/s/6nye6wefmwwmk

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2573

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2573/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #15761 from [BEAM-13008] Create gradle tasks for the


------------------------------------------
[...truncated 337.25 KB...]
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 20 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 892951abc16f3e27e8ef159071a75a5f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 20'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 20'
Successfully started process 'Gradle Test Executor 20'
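
[Editor's note: the -DbeamTestPipelineOptions JSON in the command line above is ultimately handed to Beam's standard option parser. A minimal, self-contained sketch of that parsing path using PipelineOptionsFactory; the argument values are placeholders, not the job's real settings:]

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class OptionsSketch {
      public static void main(String[] args) {
        // Mirrors a subset of the options in the command line above.
        String[] testArgs = {
            "--tempLocation=gs://some-bucket/loadtests", // placeholder bucket
            "--jobName=sql-bqio-read-sketch"             // placeholder job name
        };
        PipelineOptions options =
            PipelineOptionsFactory.fromArgs(testArgs).withValidation().create();
        System.out.println(options.getTempLocation());
      }
    }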

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 21, 2021 6:44:39 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 21, 2021 6:44:39 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 21, 2021 6:44:40 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 21, 2021 6:44:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:44:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 6:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:44:44 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 6:44:44 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
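
[Editor's note: for readers unfamiliar with where these plans come from, the same query shape can be reproduced against an in-memory PCollection with SqlTransform. A minimal sketch; the schema and rows are invented stand-ins for the HACKER_NEWS table, and PCOLLECTION is SqlTransform's default name for a single input:]

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class HackerNewsQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Toy stand-in for the HACKER_NEWS table; field names match the query.
        Schema schema = Schema.builder()
            .addStringField("title")
            .addStringField("by")
            .addInt64Field("score")
            .addStringField("type")
            .build();
        Row story = Row.withSchema(schema).addValues("post", "alice", 3L, "story").build();
        Row job = Row.withSchema(schema).addValues("hiring", "bob", 1L, "job").build();

        PCollection<Row> input = p.apply(Create.of(story, job).withRowSchema(schema));

        input.apply(SqlTransform.query(
            "SELECT `by` AS author, `type`, title, score FROM PCOLLECTION "
                + "WHERE (`type` = 'story' OR `type` = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }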


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1800141971]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
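
[Editor's note: the exception text itself names the remedy. A minimal sketch of the setRowSchema fix on a Row-producing ParDo, analogous to the RowMonitor step in the trace; the schema and the pass-through DoFn are invented for illustration:]

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());

        Schema schema = Schema.builder()
            .addStringField("type")
            .addInt64Field("score")
            .build();
        Row row = Row.withSchema(schema).addValues("story", 3L).build();

        PCollection<Row> monitored =
            p.apply(Create.of(row).withRowSchema(schema))
                // Pass-through DoFn<Row, Row> standing in for ParDo(RowMonitor).
                .apply("RowMonitor", ParDo.of(new DoFn<Row, Row>() {
                  @ProcessElement
                  public void process(@Element Row r, OutputReceiver<Row> out) {
                    out.output(r);
                  }
                }))
                // Without this line, coder inference fails exactly as logged above.
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }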

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@878870100]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
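
[Editor's note: the other remedy listed in the same message is an explicit coder. With the schema from the sketch above, the final .setRowSchema(schema) line could instead be the fragment below; RowCoder is the schema-aware coder in the core SDK:]

    // Drop-in alternative to .setRowSchema(schema) in the previous sketch;
    // RowCoder lives in org.apache.beam.sdk.coders.RowCoder.
    .setCoder(RowCoder.of(schema))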

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
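
[Editor's note: this filter line is the observable effect of push-down: both the column projection (usedFields above) and the predicate travel to the BigQuery Storage Read API instead of being applied inside the pipeline. Outside of Beam SQL, a roughly equivalent direct use of BigQueryIO would look like the sketch below; the table reference is a placeholder:]

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        p.apply(
            BigQueryIO.readTableRows()
                .from("some-project:hacker_news.full") // placeholder table reference
                .withMethod(Method.DIRECT_READ)
                // Column projection, mirroring usedFields=[[by, type, title, score]]:
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate handed to the Storage Read API, mirroring the filter above:
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }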
    Oct 21, 2021 6:44:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 21, 2021 6:44:48 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 21, 2021 6:44:49 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 21, 2021 6:44:49 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3642596745687179173.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-uRkyJNgXMZYc24k-56QT74P95CK0zumxHkYMPOfLEsI.jar
    Oct 21, 2021 6:44:49 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mongodb/mongo-java-driver/3.12.7/1f45c6a397feeb46da75425619333d1cc6f90f78/mongo-java-driver-3.12.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mongo-java-driver-3.12.7-D_zgBJWNb9mzmuetJ37a0X9XtpcfSGsXYpxe6eE8Tao.jar
    Oct 21, 2021 6:44:49 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 21, 2021 6:44:49 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 21, 2021 6:44:49 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 434bd57989a4804df7b209bed9a78276a311e902920425d95a583992789e8cc5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Q0vVeYmkgE33sgm-2aeCdqMR6QKSBCXZWlg5kniejMU.pb
    Oct 21, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 21, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 21, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 21, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 21, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-21_11_44_52-4963134360258444781?project=apache-beam-testing
    Oct 21, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-21_11_44_52-4963134360258444781
    Oct 21, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-21_11_44_52-4963134360258444781
    Oct 21, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-21T18:44:58.683Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
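
[Editor's note: this warning follows from the --autoscalingAlgorithm=NONE / --numWorkers=5 / --maxNumWorkers=5 combination in the test's command line: with autoscaling disabled, the pool is fixed at numWorkers and the max is irrelevant. A minimal sketch of the same fixed-size configuration set programmatically on DataflowPipelineOptions:]

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineWorkerPoolOptions.AutoscalingAlgorithmType;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class FixedPoolOptionsSketch {
      public static void main(String[] args) {
        DataflowPipelineOptions options =
            PipelineOptionsFactory.create().as(DataflowPipelineOptions.class);
        options.setNumWorkers(5);
        options.setMaxNumWorkers(5); // ignored once autoscaling is NONE, per the warning
        options.setAutoscalingAlgorithm(AutoscalingAlgorithmType.NONE);
      }
    }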
    Oct 21, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:10.151Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.010Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.050Z: Expanding GroupByKey operations into optimizable parts.
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.088Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.161Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.197Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.248Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.647Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.729Z: Starting 5 workers in us-central1-a...
    Oct 21, 2021 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:28.265Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 21, 2021 6:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:46:02.787Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 21, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:46:32.843Z: Workers have started successfully.
    Oct 21, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:46:32.883Z: Workers have started successfully.
    Oct 21, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:47:01.075Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:47:01.232Z: Cleaning up.
    Oct 21, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:47:01.323Z: Stopping worker pool...
    Oct 21, 2021 6:49:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:49:24.255Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 21, 2021 6:49:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:49:24.299Z: Worker pool stopped.
    Oct 21, 2021 6:49:32 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-21_11_44_52-4963134360258444781 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): fe052573-83ba-45ca-bf4a-21d532d4f6c6 and timestamp: 2021-10-21T18:49:32.568000000Z:
                     Metric:                    Value:
                   read_time                     7.605
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 6:49:32 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 20 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.003 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 57.376 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/pr2d5zvbfw5fo

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2572

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2572/display/redirect>

Changes:


------------------------------------------
[...truncated 338.71 KB...]
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 892951abc16f3e27e8ef159071a75a5f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 21, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 21, 2021 12:44:59 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 21, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 21, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 21, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 21, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 21, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 21, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8379026470928085335.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-TE4v05Iz6GNk3e0phHD1R9aQIyP7M1ykfvxs8yBmDKU.jar
    Oct 21, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigtable-emulator/0.137.1/b1639aa134de1302e43d9e9c3843f6ff853c510f/google-cloud-bigtable-emulator-0.137.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigtable-emulator-0.137.1-V2SlLU4BjIGf5QoF7YL-A-QNjw49NdXrNM4QoX9MXSI.jar
    Oct 21, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 2 seconds
    Oct 21, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 21, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 1b19fec33e65c08c8c8303eb061d3c9c2dd685a81eceaba550c5364f9dae2769> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Gxn-wz5lwIyMgwPrBh08nC3WhagezqulUMU2T52uJ2k.pb
    Oct 21, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 21, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 21, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 21, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 21, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-21_05_45_13-14069682441316332178?project=apache-beam-testing
    Oct 21, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-21_05_45_13-14069682441316332178
    Oct 21, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-21_05_45_13-14069682441316332178
    Oct 21, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-21T12:45:20.013Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 21, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:30.351Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 21, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.177Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 21, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.219Z: Expanding GroupByKey operations into optimizable parts.
    Oct 21, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.247Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 21, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.329Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 21, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.358Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 21, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.392Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 21, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.728Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.782Z: Starting 5 workers in us-central1-c...
    Oct 21, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:40.351Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 21, 2021 12:46:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:46:17.465Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 21, 2021 12:46:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:46:41.541Z: Workers have started successfully.
    Oct 21, 2021 12:46:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:46:41.573Z: Workers have started successfully.
    Oct 21, 2021 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:47:11.775Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:47:13.356Z: Cleaning up.
    Oct 21, 2021 12:47:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:47:13.985Z: Stopping worker pool...
    Oct 21, 2021 12:49:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:49:50.461Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 21, 2021 12:49:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:49:50.495Z: Worker pool stopped.
    Oct 21, 2021 12:49:58 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-21_05_45_13-14069682441316332178 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ed88d379-c53c-4e1f-9e77-65aa610091e6 and timestamp: 2021-10-21T12:49:58.063000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.408

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 12:49:58 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 5 mins 3.522 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 37s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/s5cxprhwp2xm4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2571

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2571/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13082] Re-use dataWriter buffer. (#15762)


------------------------------------------
[...truncated 340.96 KB...]
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 21, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 21, 2021 6:44:57 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 21, 2021 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 21, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 21, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
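    
    Note how this third test differs from the two failing ones: the BEAMPlan has a single BeamPushDownIOSourceRel carrying both the field projection (usedFields) and the predicate (BigQueryFilter), so no BeamCalcRel filter stage runs in Beam. At the IO level this corresponds roughly to the sketch below, using BigQueryIO's Storage Read API; the table reference is a placeholder, not the test's real table:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // DIRECT_READ = BigQuery Storage Read API; the selected fields and the
        // row restriction are evaluated by BigQuery itself, mirroring
        // usedFields and the pushed-down filter logged above.
        p.apply(BigQueryIO.readTableRows()
            .from("apache-beam-testing:beam.HACKER_NEWS") // placeholder table
            .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }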
    Oct 21, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 21, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 21, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 21, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7834221037006516401.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-315tDq8MFlGE39qB9vOsf-KL5uqIG9cUkUdmALX2Dto.jar
    Oct 21, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-h5Vm59YQQa-RcmHCCG09Iw6skDekxZqFrPsGtnvUHB4.jar
    Oct 21, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tests-I-osQrG-lSEES6RD4ibuovIeVl-_U3nOEEsBLo1rYrA.jar
    Oct 21, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/join-library/build/libs/beam-sdks-java-extensions-join-library-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-join-library-2.35.0-SNAPSHOT-xz-aSlhizJi8jigGo3amZ2g_MkglE3dBulicsi2UX5U.jar
    Oct 21, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/udf/build/libs/beam-sdks-java-extensions-sql-udf-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-udf-2.35.0-SNAPSHOT-MlQQAS1Je1cH4lXtIOKM3zL6o97S5F7sJQ3zUailw08.jar
    Oct 21, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 0 seconds
    Oct 21, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 21, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 2d91f240d883e03925133702491be8ea0739bdf9a51e0a49766164d722a58df5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-LZHyQNiD4DklEzcCSRvo6gc5vfmlHgpJdmFk1yKljfU.pb
    Oct 21, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 21, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 21, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 21, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 21, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-20_23_45_11-8989935288718978231?project=apache-beam-testing
    Oct 21, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-20_23_45_11-8989935288718978231
    Oct 21, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-20_23_45_11-8989935288718978231
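    
    For reference, the runner configuration implied by this submission (project, region, a fixed pool of five workers, autoscaling disabled) can also be expressed in code rather than flags. A sketch with values taken from the log above; the pipeline content is a trivial stand-in:

    import org.apache.beam.runners.dataflow.DataflowRunner;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineWorkerPoolOptions.AutoscalingAlgorithmType;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.Create;

    public class RunnerConfigSketch {
      public static void main(String[] args) {
        DataflowPipelineOptions opts =
            PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);
        opts.setRunner(DataflowRunner.class);
        opts.setProject("apache-beam-testing");
        opts.setRegion("us-central1");
        opts.setNumWorkers(5);
        opts.setMaxNumWorkers(5);
        // Matches the WARNING above: with NONE, maxNumWorkers is ignored.
        opts.setAutoscalingAlgorithm(AutoscalingAlgorithmType.NONE);

        Pipeline p = Pipeline.create(opts);
        p.apply(Create.of("probe")); // placeholder step so the job is non-empty
        p.run();
      }
    }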
    Oct 21, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-21T06:45:15.376Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 21, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:26.088Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 21, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:26.892Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 21, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:26.940Z: Expanding GroupByKey operations into optimizable parts.
    Oct 21, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:26.989Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 21, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:27.061Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 21, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:27.096Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 21, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:27.130Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 21, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:27.765Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:27.879Z: Starting 5 workers in us-central1-a...
    Oct 21, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:52.870Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 21, 2021 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:58.879Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 21, 2021 6:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:58.907Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 21, 2021 6:46:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:46:09.225Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 21, 2021 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:46:32.447Z: Workers have started successfully.
    Oct 21, 2021 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:46:32.478Z: Workers have started successfully.
    Oct 21, 2021 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:47:01.777Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:47:04.149Z: Cleaning up.
    Oct 21, 2021 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:47:05.375Z: Stopping worker pool...
    Oct 21, 2021 6:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:49:21.982Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 21, 2021 6:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:49:22.025Z: Worker pool stopped.
    Oct 21, 2021 6:49:31 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-20_23_45_11-8989935288718978231 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 594fe433-3119-4656-9f63-07797c0bb092 and timestamp: 2021-10-21T06:49:31.585000000Z:
                     Metric:                    Value:
                   read_time                     8.516
                 fields_read                 4375276.0
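
The two counters above come from the monitoring ParDos wrapped around the read (ParDo(RowMonitor) and ParDo(TimeMonitor) in the step list). In spirit, such a monitor is a pass-through DoFn built on Beam's Metrics API; a sketch under that assumption, not the test's actual implementation:

    import org.apache.beam.sdk.metrics.Counter;
    import org.apache.beam.sdk.metrics.Metrics;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.values.Row;

    // Pass-through monitor counting fields; the namespace and counter name
    // mirror the report above but are assumptions.
    class RowMonitorSketch extends DoFn<Row, Row> {
      private final Counter fieldsRead =
          Metrics.counter("RowMonitorSketch", "fields_read");

      @ProcessElement
      public void process(@Element Row row, OutputReceiver<Row> out) {
        fieldsRead.inc(row.getFieldCount());
        out.output(row);
      }
    }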

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 6:49:32 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.194 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 38.887 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/dqhroygbgfunk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2570

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2570/display/redirect>

Changes:


------------------------------------------
[...truncated 338.98 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 892951abc16f3e27e8ef159071a75a5f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 21, 2021 12:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 21, 2021 12:44:50 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 21, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 21, 2021 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@311251360]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 21, 2021 12:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 21, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 21, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 21, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2124775571116606532.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-nterUiI2-QH3mHeIdZAyFwhgUF_9uFKQqGWbkF4Fxo8.jar
    Oct 21, 2021 12:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 21, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 21, 2021 12:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 669af0c6eae15c39bbb14ec0f8e2a0e04a606d2b0f52ef42011992ae747c0303> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ZprwxurhXDm7sU7A-OKg4EpgbSsPUu9CARmSrnR8AwM.pb
    Oct 21, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 21, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 21, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 21, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 21, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-20_17_45_02-2472126720864607938?project=apache-beam-testing
    Oct 21, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-20_17_45_02-2472126720864607938
    Oct 21, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-20_17_45_02-2472126720864607938
    Oct 21, 2021 12:45:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-21T00:45:06.002Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:14.206Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:15.016Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:15.210Z: Expanding GroupByKey operations into optimizable parts.
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:15.368Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:15.546Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:15.588Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:15.623Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:15.984Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:16.063Z: Starting 5 workers in us-central1-c...
    Oct 21, 2021 12:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:24.238Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 21, 2021 12:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:46:00.047Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 21, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:46:26.516Z: Workers have started successfully.
    Oct 21, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:46:26.534Z: Workers have started successfully.
    Oct 21, 2021 12:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:46:56.310Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 12:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:46:56.504Z: Cleaning up.
    Oct 21, 2021 12:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:46:56.585Z: Stopping worker pool...
    Oct 21, 2021 12:49:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:49:29.191Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 21, 2021 12:49:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:49:29.226Z: Worker pool stopped.
    Oct 21, 2021 12:49:36 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-20_17_45_02-2472126720864607938 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5fcbf8e9-5380-48fa-ae42-217760f9e046 and timestamp: 2021-10-21T00:49:36.096000000Z:
                     Metric:                    Value:
                   read_time                     7.707
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 12:49:36 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.045 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 49.945 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 19s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/esybkskpsgodm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2569

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2569/display/redirect?page=changes>

Changes:

[noreply] Add -XX:+AlwaysActAsServerClassMachine to Java SDK container

[moritz] adhoc: Minor update to flink runner docs

[noreply] [BEAM-11087] Add default WindowMappingFn from Main to Side Input


------------------------------------------
[...truncated 337.20 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 892951abc16f3e27e8ef159071a75a5f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 20, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 20, 2021 6:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 20, 2021 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 20, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 20, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 20, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 20, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 20, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 20, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4953441860378257756.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-wwuBlSm9n4Pv5-JlAPhgc-0Ov7i4ymlRWbLFhSEKbdQ.jar
    Oct 20, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 20, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 20, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash c23bd765d27c9d90d242d2d6ff7805807bf512b1874a563c72322a3ee8788413> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-wjvXZdJ8nZDSQtLW_3gFgHv1ErGHSlY8cjIqPuh4hBM.pb
    Oct 20, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 20, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 20, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 20, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 20, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-20_11_45_08-3573852219966413355?project=apache-beam-testing
    Oct 20, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-20_11_45_08-3573852219966413355
    Oct 20, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-20_11_45_08-3573852219966413355
    Oct 20, 2021 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-20T18:45:12.135Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 20, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:17.818Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:18.620Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:18.653Z: Expanding GroupByKey operations into optimizable parts.
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:18.688Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:18.764Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:18.875Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:18.933Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:19.340Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:19.406Z: Starting 5 workers in us-central1-a...
    Oct 20, 2021 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:30.548Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 20, 2021 6:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:46:03.797Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 20, 2021 6:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:46:29.177Z: Workers have started successfully.
    Oct 20, 2021 6:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:46:29.219Z: Workers have started successfully.
    Oct 20, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:46:57.372Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:46:57.503Z: Cleaning up.
    Oct 20, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:46:57.573Z: Stopping worker pool...
    Oct 20, 2021 6:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:49:23.303Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 20, 2021 6:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:49:23.341Z: Worker pool stopped.
    Oct 20, 2021 6:49:30 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-20_11_45_08-3573852219966413355 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 81ac351d-f904-4eb7-840f-c7e588ede68e and timestamp: 2021-10-20T18:49:30.690000000Z:
                     Metric:                    Value:
                   read_time                     7.473
                 fields_read                 4375276.0
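
As a rough derived figure, not part of the Jenkins output: the push-down read above scanned about 4,375,276 fields in 7.473 s of measured read time, i.e. roughly 4375276 / 7.473 ≈ 585,000 fields per second aggregated across the 5 workers.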

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 6:49:31 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 39.801 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wbdln2o2wfpai

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2568

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2568/display/redirect?page=changes>

Changes:

[egalpin] [BEAM-10990] Adds response filtering for ElasticsearchIO

[egalpin] [BEAM-5172] Tries to reduce ES uTest flakiness


------------------------------------------
[...truncated 339.62 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 892951abc16f3e27e8ef159071a75a5f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 20, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 20, 2021 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 20, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 20, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
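
The exception above names two remedies. A minimal sketch of the first, PCollection.setRowSchema, follows; the field names come from the query's projected columns, while the exact types and the variable monitoredRows are illustrative assumptions, not details taken from the test source:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Illustrative schema for the four projected columns; the authoritative
    // types live in the HACKER_NEWS table definition.
    Schema schema =
        Schema.builder()
            .addNullableField("by", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

    // monitoredRows stands in for the ParDo(RowMonitor) output that coder
    // inference failed on; attaching the schema lets the SDK infer a RowCoder.
    PCollection<Row> monitoredRows = /* output of ParDo(RowMonitor) */ null;
    monitoredRows.setRowSchema(schema);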

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
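
The error text equally allows pinning an explicit coder via .setCoder(); with a schema like the one sketched earlier, that variant is a one-liner:

    import org.apache.beam.sdk.coders.RowCoder;

    // Equivalent to setRowSchema for Row elements: set the coder explicitly.
    monitoredRows.setCoder(RowCoder.of(schema));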

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
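
The projection (usedFields) and filter the planner pushed down here map onto options BigQueryIO exposes for DIRECT_READ sources. A sketch of the equivalent hand-written read, with a placeholder table spec rather than the one the test resolves:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    TypedRead<TableRow> read =
        BigQueryIO.readTableRows()
            .from("project:dataset.HACKER_NEWS")  // placeholder table spec
            .withMethod(TypedRead.Method.DIRECT_READ)
            // Mirrors usedFields=[[by, type, title, score]] from the BEAMPlan:
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // Mirrors the filter logged above:
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
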
    Oct 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 20, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 20, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 20, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6611324795219753643.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-r046PcstO7Q5G2sH6wsoNmJRJf1URpfZPijXmT13LZ0.jar
    Oct 20, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 20, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 20, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash fc6bfe19c5e9bd96d5367a5d8a9ccea238fc0cfddbd0e3e45dc8622d758b0176> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-_Gv-GcXpvZbVNnpdipzOojj8DP3b0OPkXchiLXWLAXY.pb
    Oct 20, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 20, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 20, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 20, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 20, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-20_05_45_08-10224540884455294911?project=apache-beam-testing
    Oct 20, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-20_05_45_08-10224540884455294911
    Oct 20, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-20_05_45_08-10224540884455294911
    Oct 20, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-20T12:45:13.695Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:21.898Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:22.642Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:22.673Z: Expanding GroupByKey operations into optimizable parts.
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:22.690Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:22.751Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:22.786Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:22.821Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:23.167Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:23.241Z: Starting 5 workers in us-central1-a...
    Oct 20, 2021 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:54.770Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 20, 2021 12:46:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:46:05.246Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 20, 2021 12:46:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:46:05.275Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 20, 2021 12:46:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:46:15.532Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 20, 2021 12:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:46:39.173Z: Workers have started successfully.
    Oct 20, 2021 12:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:46:39.241Z: Workers have started successfully.
    Oct 20, 2021 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:47:06.657Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:47:06.777Z: Cleaning up.
    Oct 20, 2021 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:47:06.854Z: Stopping worker pool...
    Oct 20, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:49:26.793Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 20, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:49:26.834Z: Worker pool stopped.
    Oct 20, 2021 12:49:37 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-20_05_45_08-10224540884455294911 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6591de82-0db9-4c10-bdc8-e23aa36d9e57 and timestamp: 2021-10-20T12:49:37.334000000Z:
                     Metric:                    Value:
                   read_time                      8.03
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 12:49:37 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 46.337 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 20s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/mnrkfgsl3g37q

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2567

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2567/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13079] Updates cross-language transform URNs to use the new


------------------------------------------
[...truncated 340.51 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 892951abc16f3e27e8ef159071a75a5f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 20, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 20, 2021 6:45:13 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 20, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 20, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])
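
For context on where the SQL and BEAMPlan entries come from: a query of this shape is normally issued through Beam SQL's SqlTransform (the IT itself drives the Calcite planner through its BigQuery table provider). A sketch, assuming a schema-aware PCollection<Row> named hackerNewsRows:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;

    // Register hackerNewsRows under the name the query refers to, then apply
    // the same projection and filter the planner printed above.
    PCollection<Row> filtered =
        PCollectionTuple.of("HACKER_NEWS", hackerNewsRows)
            .apply(
                SqlTransform.query(
                    "SELECT `by` AS author, `type`, `title`, `score` "
                        + "FROM HACKER_NEWS "
                        + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));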

    Oct 20, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 20, 2021 6:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 20, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 20, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 20, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8917328065085856496.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-BUzLisP-f9L9NOQlKVRgwG3C7mUCwosJYwR2s31Vci8.jar
    Oct 20, 2021 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 20, 2021 6:45:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 20, 2021 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash 234003c919b874b18128bc0f374052d0d43234de3ab0a1f23af8a2f3be09eb26> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-I0ADyRm4dLGBKLwPN0BS0NQyNN46sKHyOvii874J6yY.pb
    Oct 20, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 20, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 20, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 20, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 20, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-19_23_45_26-13242410420494433170?project=apache-beam-testing
    Oct 20, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-19_23_45_26-13242410420494433170
    Oct 20, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-19_23_45_26-13242410420494433170
    Oct 20, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-20T06:45:29.994Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 20, 2021 6:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:36.462Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.203Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.234Z: Expanding GroupByKey operations into optimizable parts.
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.266Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.342Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.360Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.388Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.724Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.802Z: Starting 5 workers in us-central1-c...
    Oct 20, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:41.426Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 20, 2021 6:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:46:17.084Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 20, 2021 6:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:46:45.152Z: Workers have started successfully.
    Oct 20, 2021 6:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:46:45.183Z: Workers have started successfully.
    Oct 20, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:47:11.341Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:47:11.474Z: Cleaning up.
    Oct 20, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:47:11.552Z: Stopping worker pool...
    Oct 20, 2021 6:49:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:49:41.576Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 20, 2021 6:49:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:49:41.613Z: Worker pool stopped.
    Oct 20, 2021 6:49:49 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-19_23_45_26-13242410420494433170 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a633df80-3971-4763-a09f-c53ef755a0d5 and timestamp: 2021-10-20T06:49:49.886000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.396

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 6:49:50 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 41.567 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 30s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/2fdnoeb542y6s

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2566

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2566/display/redirect?page=changes>

Changes:

[aydar.zaynutdinov] [BEAM_13077][Playground]

[Kyle Weaver] [BEAM-13055] Use unshallow clone to create PR.


------------------------------------------
[...truncated 337.30 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 22 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 1ee7405c9139a4bb4156d477fce72840
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 22'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 22'
Successfully started process 'Gradle Test Executor 22'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 20, 2021 12:44:37 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 20, 2021 12:44:38 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 20, 2021 12:44:38 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 20, 2021 12:44:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:44:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:44:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 12:44:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:44:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:44:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 12:44:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
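
All of the coder failures in this digest (this one, readUsingDefaultMethod below, and the identical pair in builds #2565 and #2564 further down) point at the remedy spelled out in the exception itself: give the Row PCollection a schema so a coder can be inferred. A minimal sketch of that remedy in isolation, with field names assumed from the logged SELECT list rather than taken from the actual BigQueryIOPushDownIT fix:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFixSketch {
      // Attaching a schema lets PCollection.getCoder() infer a row coder,
      // which is what the IllegalStateException above asks for.
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        // Hypothetical schema mirroring the SELECT list in the logged query.
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();
        // setRowSchema is roughly shorthand for setCoder over a row coder
        // built from this schema.
        return rows.setRowSchema(schema);
      }
    }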

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 12:44:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 20, 2021 12:44:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 20, 2021 12:44:44 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 20, 2021 12:44:47 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 20, 2021 12:44:47 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 20, 2021 12:44:47 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2272615731597150945.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-kYWMcOTm0XIsEWIM4Ol1SBRjQnmFat_Wj2tv4KZ1bZk.jar
    Oct 20, 2021 12:44:47 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 20, 2021 12:44:47 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 20, 2021 12:44:47 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash b9b5321a17b2bc66c6b0050faf51815675d5f9c28f027f528ee5b77879668813> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ubUyGheyvGbGsAUPr1GBVnXV-cKPAn9SjuW3eHlmiBM.pb
    Oct 20, 2021 12:44:50 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 20, 2021 12:44:50 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 20, 2021 12:44:50 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 20, 2021 12:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 20, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-19_17_44_50-3602477040614239517?project=apache-beam-testing
    Oct 20, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-19_17_44_50-3602477040614239517
    Oct 20, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-19_17_44_50-3602477040614239517
    Oct 20, 2021 12:44:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-20T00:44:53.632Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:44:59.455Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.180Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.218Z: Expanding GroupByKey operations into optimizable parts.
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.236Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.320Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.358Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.401Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.776Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.857Z: Starting 5 workers in us-central1-a...
    Oct 20, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:08.920Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 20, 2021 12:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:45.969Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 20, 2021 12:46:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:46:13.169Z: Workers have started successfully.
    Oct 20, 2021 12:46:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:46:13.196Z: Workers have started successfully.
    Oct 20, 2021 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:46:42.679Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:46:42.829Z: Cleaning up.
    Oct 20, 2021 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:46:42.918Z: Stopping worker pool...
    Oct 20, 2021 12:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:49:12.396Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 20, 2021 12:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:49:12.441Z: Worker pool stopped.
    Oct 20, 2021 12:49:21 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-19_17_44_50-3602477040614239517 finished with status DONE.
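
For contrast with the two failures above: readUsingDirectReadMethodPushDown ran to completion, and its BEAMPlan collapses the projection and filter into a single BeamPushDownIOSourceRel, so the used fields and the filter are handed to BigQuery itself instead of being evaluated by a BeamCalcRel inside the pipeline. As a rough illustration only (class names are from the google-cloud-bigquerystorage client, not from this log), the pushed-down read corresponds to Storage Read API options along these lines:

    import com.google.cloud.bigquery.storage.v1.ReadSession;

    class PushDownReadOptionsSketch {
      // Approximates the logged push-down: read only the used fields and
      // apply the filter server-side as a row restriction.
      static ReadSession.TableReadOptions options() {
        return ReadSession.TableReadOptions.newBuilder()
            .addSelectedFields("by")
            .addSelectedFields("type")
            .addSelectedFields("title")
            .addSelectedFields("score")
            .setRowRestriction(
                "(type = 'story' OR type = 'job') AND score > 2")
            .build();
      }
    }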

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f58ef3f3-1bed-491e-8093-33f3c6ffb2d4 and timestamp: 2021-10-20T00:49:21.122000000Z:
                     Metric:                    Value:
                   read_time                     9.419
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 12:49:22 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
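
The job itself finished DONE; only metrics publishing was skipped, because the InfluxDB measurement/database properties were absent from the test pipeline options. If publishing were wanted, the settings would be passed alongside the other -DbeamTestPipelineOptions entries; the flag names and values below are the conventional Beam test options, assumed rather than read from this log:

    "--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=beam_test_metrics"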

Gradle Test Executor 22 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.008 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 48.822 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 4s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wl436vf63hb4o

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2565

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2565/display/redirect?page=changes>

Changes:

[samuelw] [BEAM-13042] Prevent unexpected blocking in

[ilya.kozyrev] Implement initial server with grpcweb wrapper

[noreply] [BEAM-13068] Add a SQL API in Beam Go SDK (#15746)


------------------------------------------
[...truncated 337.69 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 1ee7405c9139a4bb4156d477fce72840
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 19, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 19, 2021 6:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 19, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 19, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@311251360]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 19, 2021 6:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 19, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 19, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 19, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5615582546243157905.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-IcjEkvDtPEUdA2JV050sKrYn2pTUlgW6qSHEUJUzJus.jar
    Oct 19, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 19, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 19, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 87a4e79e7a021c54426af1539bbc715d41337c6f069592b50b02e128e3bc45aa> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-h6TnnnoCHFRCavFTm7xxXUEzfG8GlZK1CwLhKOO8Rao.pb
    Oct 19, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-19_11_45_04-836460464704131324?project=apache-beam-testing
    Oct 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-19_11_45_04-836460464704131324
    Oct 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-19_11_45_04-836460464704131324
    Oct 19, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-19T18:45:07.328Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 19, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:14.609Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:15.449Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:15.490Z: Expanding GroupByKey operations into optimizable parts.
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:15.517Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:15.596Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:15.629Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:15.660Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:15.985Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:16.057Z: Starting 5 workers in us-central1-c...
    Oct 19, 2021 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:30.759Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 19, 2021 6:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:46:03.527Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 19, 2021 6:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:46:29.563Z: Workers have started successfully.
    Oct 19, 2021 6:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:46:29.584Z: Workers have started successfully.
    Oct 19, 2021 6:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:46:59.056Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 6:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:46:59.252Z: Cleaning up.
    Oct 19, 2021 6:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:46:59.327Z: Stopping worker pool...
    Oct 19, 2021 6:49:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:49:20.897Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 19, 2021 6:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:49:20.946Z: Worker pool stopped.
    Oct 19, 2021 6:49:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-19_11_45_04-836460464704131324 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): bc91b89d-1261-499a-9050-fbed9b644e5c and timestamp: 2021-10-19T18:49:27.291000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.843

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 6:49:27 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 39.784 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/jlwvg75yr6uju

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2564

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2564/display/redirect>

Changes:


------------------------------------------
[...truncated 337.93 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is f4f5b1c924385b744485e1396a8bcb8d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 19, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 19, 2021 12:45:03 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 19, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 19, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])
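
The BEAMPlan above shows the push-down working as intended: the IO source rel carries both the projection (usedFields) and the supported filter, so only four columns and the matching rows ever leave BigQuery. At the IO level this corresponds roughly to the hand-written direct read sketched below; the table name is a placeholder, not the table the test reads, and the snippet is illustrative rather than the IT's code:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownReadSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Projection and predicate pushed into the BigQuery Storage API read,
        // mirroring usedFields and BigQueryFilter in the plan above.
        PCollection<TableRow> rows = pipeline.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // placeholder table
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }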

    Oct 19, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 19, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 19, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 19, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-OVq9X9VVMbYHloIn9ooNsMDjnicWyAJh7yetnGKL094.jar
    Oct 19, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2748212382736978675.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-N8KoSoQtT6vbUsWpaoEOs8e_cJzVLbDARn6DKF-2x1E.jar
    Oct 19, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 19, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 19, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 42cfe120303cc00951dccd9a04c9e10967a88cea4f7c24d6bcd6bf9178ef3278> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Qs_hIDA8wAlR3M2aBMnhCWeojOpPfCTWvNa_kXjvMng.pb
    Oct 19, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 19, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 19, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 19, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 19, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-19_05_45_18-12249500119034946618?project=apache-beam-testing
    Oct 19, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-19_05_45_18-12249500119034946618
    Oct 19, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-19_05_45_18-12249500119034946618
    Oct 19, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-19T12:45:21.321Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
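
This warning is expected given the options in play: with autoscalingAlgorithm=NONE the pool is pinned at numWorkers (5), so maxNumWorkers is parsed but never consulted. If autoscaling were actually wanted, the usual combination (illustrative values; THROUGHPUT_BASED is Dataflow's autoscaling mode) would look like:

    --autoscalingAlgorithm=THROUGHPUT_BASED --numWorkers=1 --maxNumWorkers=5
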
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:28.444Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.255Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.303Z: Expanding GroupByKey operations into optimizable parts.
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.339Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.410Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.436Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.471Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.825Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.894Z: Starting 5 workers in us-central1-c...
    Oct 19, 2021 12:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:45.102Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 19, 2021 12:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:46:10.938Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 19, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:46:37.268Z: Workers have started successfully.
    Oct 19, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:46:37.300Z: Workers have started successfully.
    Oct 19, 2021 12:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:47:08.182Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 12:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:47:08.340Z: Cleaning up.
    Oct 19, 2021 12:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:47:08.430Z: Stopping worker pool...
    Oct 19, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:49:26.434Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 19, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:49:26.476Z: Worker pool stopped.
    Oct 19, 2021 12:49:31 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-19_05_45_18-12249500119034946618 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8dcd84d4-bc07-4c2d-88e0-67659763516c and timestamp: 2021-10-19T12:49:31.637000000Z:
                     Metric:                    Value:
                   read_time                     9.295
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 12:49:32 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
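
The consequence of this warning is that the read_time/fields_read values above still go to the BigQuery sink configured via metricsBigQueryDataset/metricsBigQueryTable, but are never written to InfluxDB. Assuming the option names Beam's test utilities use for the publisher (an assumption, not something this log confirms), the fix would be two more entries in -DbeamTestPipelineOptions, with placeholder values:

    --influxDatabase=<database> --influxMeasurement=<measurement>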

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 33.69 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/e2lhsypzg2bbo

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2563

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2563/display/redirect>

Changes:


------------------------------------------
[...truncated 349.53 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is f4f5b1c924385b744485e1396a8bcb8d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 19, 2021 6:45:24 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 19, 2021 6:45:25 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 19, 2021 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 19, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 19, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 19, 2021 6:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 19, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 19, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-OVq9X9VVMbYHloIn9ooNsMDjnicWyAJh7yetnGKL094.jar
    Oct 19, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-exQY3haih5oBrslwMoodif_IANoNjE91ELYD1GlFdlE.jar
    Oct 19, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1336539546458498927.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-PLLAMZcJ66Bz2b5k1BjEqH_pF62D-o_-W-mc-rvASdo.jar
    Oct 19, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 19, 2021 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 19, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash 1581248cac0dc35037e45fc0251d1382f50dcbd70106c40d4e7377f33a1e3086> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-FYEkjKwNw1A35F_AJR0TgvUNy9cBBsQNTnN38zoeMIY.pb
    Oct 19, 2021 6:45:37 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 19, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 19, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 19, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 19, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-18_23_45_38-3754304436880156814?project=apache-beam-testing
    Oct 19, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-18_23_45_38-3754304436880156814
    Oct 19, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-18_23_45_38-3754304436880156814
    Oct 19, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-19T06:45:41.418Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:50.006Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:50.920Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:50.965Z: Expanding GroupByKey operations into optimizable parts.
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:51.003Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:51.082Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:51.113Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:51.148Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:51.517Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:51.594Z: Starting 5 workers in us-central1-c...
    Oct 19, 2021 6:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:46:03.181Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 19, 2021 6:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:46:37.326Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 19, 2021 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:47:02.960Z: Workers have started successfully.
    Oct 19, 2021 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:47:02.988Z: Workers have started successfully.
    Oct 19, 2021 6:47:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:47:32.188Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 6:47:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:47:32.342Z: Cleaning up.
    Oct 19, 2021 6:47:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:47:32.456Z: Stopping worker pool...
    Oct 19, 2021 6:49:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:49:54.916Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 19, 2021 6:49:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:49:54.965Z: Worker pool stopped.
    Oct 19, 2021 6:50:01 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-18_23_45_38-3754304436880156814 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2b6a725e-98f9-47ff-9a19-5be8cbcbcb5d and timestamp: 2021-10-19T06:50:01.461000000Z:
                     Metric:                    Value:
                   read_time                     8.015
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 6:50:01 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 42.246 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 41s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/cvxvazwg3icq2

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2562

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2562/display/redirect?page=changes>

Changes:

[zhoufek] [BEAM-9487] Multiple Trigger.may_lose_data fixes

[zhoufek] [BEAM-9487] Remove CONDITION_NOT_GUARANTEED as potential data loss

[zhoufek] [BEAM-9487] Do AfterAny, AfterAll, and AfterEach checks properly (i.e.

[zhoufek] [BEAM-9487] Remove unused import

[zhoufek] [BEAM-9487] Reintroduce flag but do not use it

[zhoufek] [BEAM-9487] Add test that shows AfterCount finishing

[zhoufek] [BEAM-9487] Make _ParallelTriggerFn.may_finish clearer

[Robert Bradshaw] Revert "Merge pull request #15441 from [BEAM-8823] Make FnApiRunner work

[Robert Bradshaw] [BEAM-13040] Add some test cases enforcing side input waiting.

[Robert Bradshaw] lint

[noreply] [BEAM-13068] Add xlangx.DecodeStructPayload (#15741)

[Luke Cwik] [BEAM-13015] Implement a simplified cancellable blocking queue with


------------------------------------------
[...truncated 341.09 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is f4f5b1c924385b744485e1396a8bcb8d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 19, 2021 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 19, 2021 12:45:15 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 19, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 19, 2021 12:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 12:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 19, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 19, 2021 12:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 19, 2021 12:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-OVq9X9VVMbYHloIn9ooNsMDjnicWyAJh7yetnGKL094.jar
    Oct 19, 2021 12:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5524549753855384375.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-sWNtJeWWpKwoUNQXkV8GLzKESmhJCO3dP3Ug-Pb0zgk.jar
    Oct 19, 2021 12:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-daLf2p_s1EbljV6ZoB5Pwlvau5Epx8e-pjTPx9ZHN24.jar
    Oct 19, 2021 12:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 19, 2021 12:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 19, 2021 12:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 9dd18d5d81f3e7c52f74ea0adaed28e9f4b1ea92aa14ccde78496ecb379f0175> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ndGNXYHz58UvdOoK2u0o6fSx6pKqFMzeeEluyzefAXU.pb
    Oct 19, 2021 12:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 19, 2021 12:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 19, 2021 12:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 19, 2021 12:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 19, 2021 12:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-18_17_45_28-6254941589366977919?project=apache-beam-testing
    Oct 19, 2021 12:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-18_17_45_28-6254941589366977919
    Oct 19, 2021 12:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-18_17_45_28-6254941589366977919
    Oct 19, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-19T00:45:32.604Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 19, 2021 12:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:40.450Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:42.080Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:42.545Z: Expanding GroupByKey operations into optimizable parts.
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:42.749Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:43.237Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:43.338Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:43.413Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:43.831Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:43.920Z: Starting 5 workers in us-central1-a...
    Oct 19, 2021 12:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:46:00.304Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 19, 2021 12:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:46:27.089Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 19, 2021 12:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:46:27.129Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 19, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:46:37.417Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 19, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:46:59.043Z: Workers have started successfully.
    Oct 19, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:46:59.308Z: Workers have started successfully.
    Oct 19, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:47:26.386Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:47:26.543Z: Cleaning up.
    Oct 19, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:47:26.651Z: Stopping worker pool...
    Oct 19, 2021 12:49:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:49:50.191Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 19, 2021 12:49:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:49:50.240Z: Worker pool stopped.
    Oct 19, 2021 12:49:56 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-18_17_45_28-6254941589366977919 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): da72475d-6728-4bfb-bd77-fd47fecd2590 and timestamp: 2021-10-19T00:49:56.555000000Z:
                     Metric:                    Value:
                   read_time                     7.971
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 12:49:57 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

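The InfluxDB warning above means this run published metrics only to BigQuery; the publisher was never given a measurement/database. A minimal sketch of the missing configuration, assuming the Beam testutils option names --influxDatabase, --influxMeasurement and --influxHost (the values below are illustrative, not this job's real settings), would be to extend the -DbeamTestPipelineOptions array with:

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"
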
Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 46.237 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 37s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/avqtonc5gtiia

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2561

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2561/display/redirect?page=changes>

Changes:

[brachipa] [BEAM-12393] sql support for Zeta Sql

[aydar.zaynutdinov] [BEAM-12988] [Playground]

[brachipa] [BEAM-12393] package private

[brachipa] [BEAM-12393] returning more generic interface

[noreply] [BEAM-13066] Produce abstract iterables from IterableCoder. (#15662)

[noreply] [BEAM-11936] Fix some errorprone warnings (#15648)


------------------------------------------
[...truncated 354.08 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c03513f16af6f9c0fa78bc1eceeb0a94
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 5'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 5'
Successfully started process 'Gradle Test Executor 5'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 18, 2021 6:46:32 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 18, 2021 6:46:33 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 18, 2021 6:46:34 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 18, 2021 6:46:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:46:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:46:38 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 6:46:38 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:46:38 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:46:38 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

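The IllegalStateException above is the failure the message itself diagnoses: ParDo(RowMonitor) outputs Beam Rows, and a Row PCollection has no inferable coder until a schema is attached. Below is a minimal, self-contained sketch of the remedy the error text recommends, PCollection.setRowSchema, using an illustrative four-field schema and a stand-in pass-through DoFn (this is not the IT's code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Illustrative schema matching the four projected columns in the SQL above.
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();
        PCollection<Row> monitored =
            p.apply(
                    Create.of(Row.withSchema(schema).addValues("a", "story", "t", 3L).build())
                        .withRowSchema(schema))
                .apply("PassThrough", ParDo.of(
                    new DoFn<Row, Row>() {
                      @ProcessElement
                      public void processElement(@Element Row row, OutputReceiver<Row> out) {
                        out.output(row);
                      }
                    }))
                // Without this call the ParDo output hits exactly the
                // "Unable to return a default Coder ... setRowSchema" error above.
                .setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }
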
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:46:40 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 6:46:40 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:46:40 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:46:40 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 6:46:40 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 18, 2021 6:46:40 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
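
For context on what that pushed-down filter becomes: with DIRECT_READ, Beam SQL's used fields and filter map onto the BigQuery Storage Read API's selected fields and row restriction. A hedged sketch of the equivalent read at the plain BigQueryIO level follows; the table name is an assumption for illustration, not taken from this log:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadRestrictionSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("bigquery-public-data:hacker_news.full") // assumed table
                    .withMethod(TypedRead.Method.DIRECT_READ)
                    // Mirrors usedFields=[[by, type, title, score]] in the BEAMPlan above.
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    // Mirrors the filter logged by buildIOReader above.
                    .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
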
    Oct 18, 2021 6:46:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 18, 2021 6:46:44 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 18, 2021 6:46:44 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-34H-ZQEas8ae8kKq4Q82eTMbi9XxDPW_7pDdqK04NMg.jar
    Oct 18, 2021 6:46:44 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6858594797609982854.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-jv2dbNcsYyYrcJ-5GARi3GxcrCYK91c1ABn-2Mk3M38.jar
    Oct 18, 2021 6:46:44 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests-ZlOfBnuV2qb30CsEbUAjwyScGcyyIS3WwPQrSSIXJqc.jar
    Oct 18, 2021 6:46:44 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-dF2DqJY8lyfxiqSRpdTBaZqBPtzja4rAbcld7GOsX0U.jar
    Oct 18, 2021 6:46:44 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests-OSJn6rDYScnOqVtjqjWjn4keTL2V-mBSDXbhyjSfST0.jar
    Oct 18, 2021 6:46:44 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tDWGgdT04k0kcm0eO5adgiF1BfJHG8lQ1W4REGrrw0g.jar
    Oct 18, 2021 6:46:45 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 0 seconds
    Oct 18, 2021 6:46:45 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 18, 2021 6:46:45 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 2a468aeff22c4cf10963abd01eab003b8a9f0af434f6a50a8bff2ff8d2d58efd> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-KkaK7_IsTPEJY6vQHqsAO4qfCvQ09qUKi_8v-NLVjv0.pb
    Oct 18, 2021 6:46:47 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 18, 2021 6:46:47 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 18, 2021 6:46:47 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 18, 2021 6:46:47 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 18, 2021 6:46:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-18_11_46_47-1909107604599465188?project=apache-beam-testing
    Oct 18, 2021 6:46:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-18_11_46_47-1909107604599465188
    Oct 18, 2021 6:46:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-18_11_46_47-1909107604599465188
    Oct 18, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-18T18:46:51.177Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 18, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:03.204Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:03.931Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:03.975Z: Expanding GroupByKey operations into optimizable parts.
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:03.999Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:04.084Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:04.117Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:04.145Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:04.496Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:04.567Z: Starting 5 workers in us-central1-a...
    Oct 18, 2021 6:47:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:15.389Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 18, 2021 6:47:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:49.045Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 18, 2021 6:48:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:48:13.669Z: Workers have started successfully.
    Oct 18, 2021 6:48:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:48:13.700Z: Workers have started successfully.
    Oct 18, 2021 6:48:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:48:44.589Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 6:48:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:48:44.749Z: Cleaning up.
    Oct 18, 2021 6:48:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:48:44.839Z: Stopping worker pool...
    Oct 18, 2021 6:51:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:51:03.857Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 18, 2021 6:51:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:51:03.927Z: Worker pool stopped.
    Oct 18, 2021 6:51:09 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-18_11_46_47-1909107604599465188 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 55134f57-78db-4fce-8df8-b23b3e6410d8 and timestamp: 2021-10-18T18:51:09.647000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     10.67

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 6:51:10 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 41.631 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 50s
152 actionable tasks: 106 executed, 46 from cache

Publishing build scan...
https://gradle.com/s/2bazexoimnpdu

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2560

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2560/display/redirect>

Changes:


------------------------------------------
[...truncated 338.76 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 18, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 18, 2021 12:44:58 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 18, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 18, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


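The SQL and the two plans above are the complete statement under test. As a self-contained sketch, the same query can be run through Beam SQL's public entry point, SqlTransform, against an illustrative in-memory table instead of BigQuery (PCOLLECTION is SqlTransform's default name for a single unnamed input):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SqlTransformSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Illustrative stand-in for the HACKER_NEWS columns the query touches.
        Schema schema =
            Schema.builder()
                .addStringField("by")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();
        PCollection<Row> hackerNews =
            p.apply(
                Create.of(Row.withSchema(schema).addValues("a", "story", "t", 3L).build())
                    .withRowSchema(schema));
        // Same statement as the planner log above; identifiers are back-tick
        // quoted because `by` is a reserved word in Beam's Calcite dialect.
        PCollection<Row> result =
            hackerNews.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, `type`, `title`, `score` "
                        + "FROM PCOLLECTION "
                        + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));
        p.run().waitUntilFinish();
      }
    }
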
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 18, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 18, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 18, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 18, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 18, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5065385169657750161.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Q1xF6P6trhpdpeXV1hgArD3Gjfa-CT8z1llRph_VF9Y.jar
    Oct 18, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 18, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 18, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 9ca872d6ea340413b1bc05ce682bcc9789af752e1a9c23664d951a3dfb88a3b4> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-nKhy1uo0BBOxvAXOaCvMl4mvdS4anCNmTZUaPfuIo7Q.pb
    Oct 18, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 18, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 18, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 18, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 18, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-18_05_45_12-4118238192366139464?project=apache-beam-testing
    Oct 18, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-18_05_45_12-4118238192366139464
    Oct 18, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-18_05_45_12-4118238192366139464
    Oct 18, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-18T12:45:15.327Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:20.048Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:20.860Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:20.891Z: Expanding GroupByKey operations into optimizable parts.
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:20.927Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:21.012Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:21.050Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:21.074Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:21.382Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:21.459Z: Starting 5 workers in us-central1-a...
    Oct 18, 2021 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:35.808Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 18, 2021 12:46:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:46:12.151Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 18, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:46:36.938Z: Workers have started successfully.
    Oct 18, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:46:36.997Z: Workers have started successfully.
    Oct 18, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:47:03.881Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:47:04.036Z: Cleaning up.
    Oct 18, 2021 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:47:04.106Z: Stopping worker pool...
    Oct 18, 2021 12:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:49:28.061Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 18, 2021 12:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:49:28.126Z: Worker pool stopped.
    Oct 18, 2021 12:49:34 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-18_05_45_12-4118238192366139464 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 38c408ed-d0af-4bda-8f12-97716f3a3d3a and timestamp: 2021-10-18T12:49:34.644000000Z:
                     Metric:                    Value:
                   read_time                     8.926
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 12:49:35 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 40.903 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 15s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/6oq3b3uiojn72

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2559

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2559/display/redirect>

Changes:


------------------------------------------
[...truncated 338.62 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 18, 2021 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 18, 2021 6:44:55 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 18, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 18, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

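The root-cause list above names the two fixes the SDK will accept: set a Coder explicitly with .setCoder(), or, since this PCollection carries Beam Rows, attach a schema so a RowCoder can be inferred. A minimal sketch of the setRowSchema route follows; the field names are chosen here only for illustration and are not copied from BigQueryIOPushDownIT:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // Attaching a schema lets PCollection.getCoder() default to RowCoder,
      // which is exactly what the checkState above found missing.
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();
        return rows.setRowSchema(schema);
      }
    }
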
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
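For reference, the usedFields and the filter logged above are what a plain BigQueryIO direct read would hand to the Storage Read API. A hedged sketch of the equivalent standalone read; the table reference is hypothetical, not the one the test uses:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;

    BigQueryIO.TypedRead<TableRow> read =
        BigQueryIO.readTableRows()
            .from("some-project:beam.HACKER_NEWS")  // hypothetical table reference
            .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
            // Projection push-down: read only the columns the query touches.
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // Filter push-down: the supported predicate runs server-side.
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
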
    Oct 18, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 18, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 18, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 18, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8892862234938683687.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-A256R58UDuU7T_Kj44mFhRwdy1BXOsFu_k7mnAkzcZk.jar
    Oct 18, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 18, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 18, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash aa0153db57e296b66d16585844a48039941dd33bdc2fc1c5bce3c30c04a28154> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-qgFT21filrZtFlhYRKSAOZQd0zvcL8HFvOPDDASigVQ.pb
    Oct 18, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 18, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 18, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 18, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 18, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-17_23_45_08-14910980539235606898?project=apache-beam-testing
    Oct 18, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-17_23_45_08-14910980539235606898
    Oct 18, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-17_23_45_08-14910980539235606898
    Oct 18, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-18T06:45:11.304Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:18.006Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:18.769Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:18.809Z: Expanding GroupByKey operations into optimizable parts.
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:18.845Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:18.919Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:18.946Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:18.977Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:19.311Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:19.387Z: Starting 5 workers in us-central1-c...
    Oct 18, 2021 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:45.449Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 18, 2021 6:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:46:04.869Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 18, 2021 6:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:46:30.375Z: Workers have started successfully.
    Oct 18, 2021 6:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:46:30.401Z: Workers have started successfully.
    Oct 18, 2021 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:47:04.999Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:47:05.181Z: Cleaning up.
    Oct 18, 2021 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:47:05.256Z: Stopping worker pool...
    Oct 18, 2021 6:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:49:26.293Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 18, 2021 6:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:49:26.339Z: Worker pool stopped.
    Oct 18, 2021 6:49:31 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-17_23_45_08-14910980539235606898 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e67691e8-5057-4838-b5d1-778f5a8004fa and timestamp: 2021-10-18T06:49:31.530000000Z:
                     Metric:                    Value:
                   read_time                    12.491
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 6:49:31 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 41.191 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ywwlwlsby576c

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2558

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2558/display/redirect>

Changes:


------------------------------------------
[...truncated 338.49 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 18, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 18, 2021 12:44:54 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 18, 2021 12:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 18, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 18, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 18, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 18, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 18, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 18, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5680631000516002864.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-2G4uw-dZrRoR_rTkYc7rieGKftzN7GNqYAUdtkYroFY.jar
    Oct 18, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 18, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 18, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash ce352ef078ff8045b673d2822f368fa5fa49da250362606542c96a52f9f46f2b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-zjUu8Hj_gEW2c9KCLzaPpfpJ2iUDYmBlQslqUvn0bys.pb
    Oct 18, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 18, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 18, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 18, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 18, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-17_17_45_06-1251233564020434467?project=apache-beam-testing
    Oct 18, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-17_17_45_06-1251233564020434467
    Oct 18, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-17_17_45_06-1251233564020434467
    Oct 18, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-18T00:45:09.322Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 18, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:16.109Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:16.938Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:16.994Z: Expanding GroupByKey operations into optimizable parts.
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:17.025Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:17.087Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:17.113Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:17.147Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:17.486Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:17.559Z: Starting 5 workers in us-central1-c...
    Oct 18, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:48.035Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 18, 2021 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:56.162Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 18, 2021 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:56.192Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 18, 2021 12:46:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:46:06.408Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 18, 2021 12:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:46:32.350Z: Workers have started successfully.
    Oct 18, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:46:32.386Z: Workers have started successfully.
    Oct 18, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:47:00.404Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:47:00.532Z: Cleaning up.
    Oct 18, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:47:00.612Z: Stopping worker pool...
    Oct 18, 2021 12:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:49:26.526Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 18, 2021 12:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:49:26.575Z: Worker pool stopped.
    Oct 18, 2021 12:49:32 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-17_17_45_06-1251233564020434467 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b8a4cc54-75a1-47f8-bc06-00b9f08bd089 and timestamp: 2021-10-18T00:49:32.822000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.391

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 12:49:33 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 43.589 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/zmdgepc42wzqa

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2557

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2557/display/redirect>

Changes:


------------------------------------------
[...truncated 340.49 KB...]
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 17, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 17, 2021 6:45:03 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 17, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 17, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 17, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 17, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 17, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 17, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 17, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4056923343965430316.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-B0X1LLZMnbhtJzTA6-fv34HCYim7n64rltJa-y72k7M.jar
    Oct 17, 2021 6:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 17, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 17, 2021 6:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash ae9549bbcc71ca5fb44e8a223a6b2123d5f67b55f60a8d4aa5583e7cf1745caf> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-rpVJu8xxyl-0TooiOmshI9X2e1X2Co1KpVg-fPF0XK8.pb
    Oct 17, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 17, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 17, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 17, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 17, 2021 6:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-17_11_45_16-3089658574741920063?project=apache-beam-testing
    Oct 17, 2021 6:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-17_11_45_16-3089658574741920063
    Oct 17, 2021 6:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-17_11_45_16-3089658574741920063
    Oct 17, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-17T18:45:19.962Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:26.795Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:27.599Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:27.636Z: Expanding GroupByKey operations into optimizable parts.
    Oct 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:27.663Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:27.737Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:27.764Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:27.797Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 17, 2021 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:28.173Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:28.239Z: Starting 5 workers in us-central1-c...
    Oct 17, 2021 6:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:55.242Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 17, 2021 6:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:58.354Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 17, 2021 6:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:58.373Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 17, 2021 6:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:46:08.572Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 17, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:46:34.664Z: Workers have started successfully.
    Oct 17, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:46:34.694Z: Workers have started successfully.
    Oct 17, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:47:01.765Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:47:01.913Z: Cleaning up.
    Oct 17, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:47:02.000Z: Stopping worker pool...
    Oct 17, 2021 6:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:49:31.669Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 17, 2021 6:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:49:31.739Z: Worker pool stopped.
    Oct 17, 2021 6:49:36 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-17_11_45_16-3089658574741920063 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a961e5dc-9bad-41ec-a9b7-718b9f381126 and timestamp: 2021-10-17T18:49:36.846000000Z:
                     Metric:                    Value:
                   read_time                     6.923
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 6:49:37 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
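
This warning is why no metrics from the run reach the dashboards: the InfluxDB publisher bails out when its measurement/database settings are absent. The fix is presumably configuration rather than code. The flag names below follow Beam's test-utils conventions and reuse the table name already present in the test's options, but they are an assumption on my part, not something this log confirms (verify against org.apache.beam.sdk.testutils.publishing.InfluxDBSettings):

    --influxDatabase=beam_test_metrics
    --influxMeasurement=sql_bqio_read_java_batch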

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 39.023 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/2vc7fnbuh3fsq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2556

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2556/display/redirect>

Changes:


------------------------------------------
[...truncated 340.72 KB...]
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
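
The -DbeamTestPipelineOptions JSON array in the command line above is how the Gradle build hands pipeline options to the test JVM; inside the tests, Beam's TestPipeline parses that system property back into a PipelineOptions object. A minimal sketch, with testingPipelineOptions() being the real entry point and the printout added only for illustration:

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class TestOptionsSketch {
      public static void main(String[] args) {
        // Parses the JSON array in the beamTestPipelineOptions system property,
        // e.g. --runner=DataflowRunner and --numWorkers=5 from the command above.
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        System.out.println(options.getRunner().getSimpleName());
      }
    }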

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 17, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 17, 2021 12:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 17, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 17, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
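
This coder failure happens at pipeline construction time, not on workers: the monitor DoFn emits Beam Row values, no coder can be inferred for Row, and finishSpecifying refuses to proceed. Below is a minimal sketch of the remedy the exception itself suggests. PCollection.setRowSchema is the real API, but the schema and pass-through DoFn are invented for illustration, and this is not necessarily how the test was eventually fixed:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowCoderSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        Schema schema = Schema.builder()
            .addStringField("author")
            .addInt64Field("score")
            .build();
        PCollection<Row> rows = p.apply(Create.of(
                Row.withSchema(schema).addValues("alice", 3L).build())
            .withRowSchema(schema));
        PCollection<Row> monitored = rows
            .apply("RowMonitor", ParDo.of(new DoFn<Row, Row>() {
              @ProcessElement
              public void process(@Element Row row, OutputReceiver<Row> out) {
                out.output(row); // pass-through, like the test's monitor DoFn
              }
            }))
            // Without this, coder inference fails exactly as in the log:
            // "Cannot provide a coder for a Beam Row."
            .setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }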

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])
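
For contrast with the DEFAULT (export-based) method used by readUsingDefaultMethod, DIRECT_READ goes through the BigQuery Storage Read API, which is what makes the column projection and row restriction in the plan above possible at the source. A rough sketch of requesting the same behavior straight from BigQueryIO; withMethod, withSelectedFields, and withRowRestriction are real options on the read transform, while the public hacker_news table name is an assumption:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        p.apply(BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full") // assumed table
            .withMethod(Method.DIRECT_READ)
            // Mirrors usedFields and the pushed-down filter from the plan above.
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }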

    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (`type` = 'story' OR `type` = 'job') AND `score` > 2
    Oct 17, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 17, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 17, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 17, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test715673712848259339.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-w2RvxqIYtUlLz5K8azoUcEwPo_LrDwBUyb6Tgb8CWb0.jar
    Oct 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.visible-assertions/visible-assertions/2.1.2/20d31a578030ec8e941888537267d3123c2ad1c1/visible-assertions-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/visible-assertions-2.1.2-RQSulosjfNzcto_1sHqmOr5JkvkHp3w9YSCqm5BBQBw.jar
    Oct 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna/5.5.0/e0845217c4907822403912ad6828d8e0b256208/jna-5.5.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-5.5.0-swj66_5O1AnehBDgpjLRZLISawNfbqz_lo05CMr7TZ4.jar
    Oct 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.duct-tape/duct-tape/1.0.8/92edc22a9ab2f3e17c9bf700aaee377d50e8b530/duct-tape-1.0.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/duct-tape-1.0.8-Mc7xLd7JedH4bXz3CMQaF9pSPQXGhf1mQunQsq3bckA.jar
    Oct 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna-platform/4.0.0/deb6bf66918989b50209b8c9aaf3b2561af7f011/jna-platform-4.0.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-platform-4.0.0-B21i7Yfna9yzdQ_-gKpJXuIRK9chJmOVTWVH9IJEkuk.jar
    Oct 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 0 seconds
    Oct 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 0ea9eabf35e01b6471399348b815299966353628021fb73a6bf6f86584a4375a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-DqnqvzXgG2RxOZNIuBUpmWY1NigCH7c6a_b4ZYSkN1o.pb
    Oct 17, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 17, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 17, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 17, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 17, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-17_05_45_09-3941338970109271771?project=apache-beam-testing
    Oct 17, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-17_05_45_09-3941338970109271771
    Oct 17, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-17_05_45_09-3941338970109271771
    Oct 17, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-17T12:45:12.127Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:16.317Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.002Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.034Z: Expanding GroupByKey operations into optimizable parts.
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.066Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.137Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.170Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.204Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.545Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.633Z: Starting 5 workers in us-central1-a...
    Oct 17, 2021 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:51.173Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 17, 2021 12:45:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:57.847Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 17, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:46:23.375Z: Workers have started successfully.
    Oct 17, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:46:23.410Z: Workers have started successfully.
    Oct 17, 2021 12:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:46:51.020Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 12:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:46:51.164Z: Cleaning up.
    Oct 17, 2021 12:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:46:51.234Z: Stopping worker pool...
    Oct 17, 2021 12:49:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:49:20.120Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 17, 2021 12:49:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:49:20.160Z: Worker pool stopped.
    Oct 17, 2021 12:49:26 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-17_05_45_09-3941338970109271771 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f94ac8da-abb2-444d-b39c-a5140cf80fd8 and timestamp: 2021-10-17T12:49:26.507000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.328

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 12:49:26 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 36.046 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/y7ehflaolprhw

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2555

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2555/display/redirect>

Changes:


------------------------------------------
[...truncated 337.72 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 17, 2021 6:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 17, 2021 6:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 17, 2021 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 17, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 17, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (`type` = 'story' OR `type` = 'job') AND `score` > 2
    Oct 17, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 17, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 17, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 17, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2070319809089714412.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-EazQ446EH9zndrKGp5LYpQ9Vuky9QPFrGxV328IP3Gs.jar
    Oct 17, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 17, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 17, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 20a042a642470c1da57d998960ad9220f63de8f120f861eb94833e4ce6cbb934> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-IKBCpkJHDB2lfZmJYK2SIPY96PEg-GHrlIM-TObLuTQ.pb
    Oct 17, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 17, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 17, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 17, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 17, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-16_23_45_03-4315291018268110247?project=apache-beam-testing
    Oct 17, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-16_23_45_03-4315291018268110247
    Oct 17, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-16_23_45_03-4315291018268110247
    Oct 17, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-17T06:45:06.685Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:11.657Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.336Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.375Z: Expanding GroupByKey operations into optimizable parts.
    Oct 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.402Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.474Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.504Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.535Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 17, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.853Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.929Z: Starting 5 workers in us-central1-a...
    Oct 17, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:33.166Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 17, 2021 6:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:46:02.299Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 17, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:46:28.278Z: Workers have started successfully.
    Oct 17, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:46:28.296Z: Workers have started successfully.
    Oct 17, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:46:54.053Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:46:54.175Z: Cleaning up.
    Oct 17, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:46:54.236Z: Stopping worker pool...
    Oct 17, 2021 6:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:49:21.033Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 17, 2021 6:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:49:21.096Z: Worker pool stopped.
    Oct 17, 2021 6:49:27 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-16_23_45_03-4315291018268110247 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d7b200d2-f66a-4d86-9fa6-28e8895eee2a and timestamp: 2021-10-17T06:49:27.094000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.561

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 6:49:27 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 40.391 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/3h5xigpexa6ps

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2554

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2554/display/redirect>

Changes:


------------------------------------------
[...truncated 338.16 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 17, 2021 12:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 17, 2021 12:44:55 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 17, 2021 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 17, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
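
Both failures above share one root cause: the RowMonitor ParDo outputs Beam Row elements, and a coder for Row cannot be inferred automatically; the output PCollection needs an explicit schema, exactly as the exception text suggests. A minimal self-contained sketch of that remedy (the schema and pipeline below are illustrative, not the actual test code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      // Schema matching the four columns the test query projects.
      private static final Schema SCHEMA =
          Schema.builder()
              .addStringField("author")
              .addStringField("type")
              .addStringField("title")
              .addInt64Field("score")
              .build();

      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        PCollection<Row> rows =
            p.apply(Create.of("story", "job"))
                .apply(
                    ParDo.of(
                        new DoFn<String, Row>() {
                          @ProcessElement
                          public void processElement(
                              @Element String type, OutputReceiver<Row> out) {
                            out.output(
                                Row.withSchema(SCHEMA)
                                    .addValues("someone", type, "a title", 3L)
                                    .build());
                          }
                        }))
                // A ParDo that outputs Row has no inferable coder; attaching
                // the schema gives the PCollection a schema-based RowCoder and
                // avoids the IllegalStateException seen above.
                .setRowSchema(SCHEMA);

        p.run().waitUntilFinish();
      }
    }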

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 17, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
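
    In effect, the plan above collapses the projection and filter into the BigQuery Storage read itself. A rough hand-written equivalent, assuming the BigQueryIO TypedRead API (the public hacker_news table spec below is illustrative):

    import java.util.Arrays;

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Read only the four used fields and evaluate the filter inside the
        // BigQuery Storage read session, so non-matching rows never leave
        // BigQuery -- the same effect the planner achieves above.
        p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // illustrative table
                .withMethod(Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
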
    Oct 17, 2021 12:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 17, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 17, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 17, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5342571152511254611.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-x-euN8NcAy-N2QJs3-2aWS3Ke2dupAD1njFMfoFzi1g.jar
    Oct 17, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 8 seconds
    Oct 17, 2021 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 17, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104888 bytes, hash ac1e90e35cace738d00fab63547968e56e67b16246221a414a07bfdbf071cacd> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-rB6Q41ys5zjQD6tjVHlo5W5nsWJGIhpBSge_2_Bxys0.pb
    Oct 17, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 17, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 17, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 17, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 17, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-16_17_45_16-15547595533803983167?project=apache-beam-testing
    Oct 17, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-16_17_45_16-15547595533803983167
    Oct 17, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-16_17_45_16-15547595533803983167
    Oct 17, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-17T00:45:19.809Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:26.746Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.417Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.445Z: Expanding GroupByKey operations into optimizable parts.
    Oct 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.472Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 17, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.534Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 17, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.559Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 17, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.593Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 17, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.915Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.988Z: Starting 5 workers in us-central1-c...
    Oct 17, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:50.914Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 17, 2021 12:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:46:08.577Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 17, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:46:33.476Z: Workers have started successfully.
    Oct 17, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:46:33.508Z: Workers have started successfully.
    Oct 17, 2021 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:47:02.998Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:47:03.165Z: Cleaning up.
    Oct 17, 2021 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:47:03.233Z: Stopping worker pool...
    Oct 17, 2021 12:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:49:23.569Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 17, 2021 12:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:49:23.607Z: Worker pool stopped.
    Oct 17, 2021 12:49:31 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-16_17_45_16-15547595533803983167 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 157f5288-fe04-4469-97ee-e73b3a8a5c31 and timestamp: 2021-10-17T00:49:31.366000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.031

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 12:49:31 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 41.735 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ibj2j4nen6gfk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2553

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2553/display/redirect>

Changes:


------------------------------------------
[...truncated 338.07 KB...]
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 16, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 16, 2021 6:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 16, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 16, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 16, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 16, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 16, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 16, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3745694269218418822.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-elpz0i77A0X_49cCF8gW-d9GwsMCZiSN5Yx8c11mNvE.jar
    Oct 16, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.alibaba/fastjson/1.2.69/6cb063f1d527ff65bdbb9ea74888a5ffc3f92197/fastjson-1.2.69.jar to gs://temp-storage-for-perf-tests/loadtests/staging/fastjson-1.2.69-KniRdEoDeOAJmpuB3b7U0uGn6GMvb4b4_8-jg-UeScg.jar
    Oct 16, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 16, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 16, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash c558ddfb8756d38e333978451b0b2ecd14c519c27619947106499cca36a21012> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-xVjd-4dW044zOXhFGwsuzRTFGcJ2GZRxBkmcyjaiEBI.pb
    Oct 16, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 16, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 16, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 16, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 16, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-16_11_45_05-962833022890362814?project=apache-beam-testing
    Oct 16, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-16_11_45_05-962833022890362814
    Oct 16, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-16_11_45_05-962833022890362814
    Oct 16, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-16T18:45:08.362Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:15.150Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:15.835Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:15.875Z: Expanding GroupByKey operations into optimizable parts.
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:15.902Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:15.970Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:15.997Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:16.025Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:16.372Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:16.446Z: Starting 5 workers in us-central1-a...
    Oct 16, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:45.134Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 16, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:46:00.697Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 16, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:46:27.241Z: Workers have started successfully.
    Oct 16, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:46:27.270Z: Workers have started successfully.
    Oct 16, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:46:55.486Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:46:55.617Z: Cleaning up.
    Oct 16, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:46:55.686Z: Stopping worker pool...
    Oct 16, 2021 6:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:49:16.256Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 16, 2021 6:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:49:16.307Z: Worker pool stopped.
    Oct 16, 2021 6:49:22 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-16_11_45_05-962833022890362814 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 63b0efc2-578d-4b47-9b4e-abae83586a70 and timestamp: 2021-10-16T18:49:22.663000000Z:
                     Metric:                    Value:
                   read_time                     7.786
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 6:49:23 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 34.471 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 4s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/5pjmq2b7bkqv2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2552

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2552/display/redirect>

Changes:


------------------------------------------
[...truncated 338.34 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 16, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 16, 2021 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 16, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 16, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
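
Note: the BEAMPlan above shows both the projection (usedFields) and the filter being folded into the BigQuery read itself, so only matching rows and columns leave BigQuery. Push-down requires a push-down aware table provider (here, BigQuery); over an ordinary in-pipeline collection the same query still runs, but the filter is applied by a BeamCalcRel instead. As a rough illustration of the query shape via SqlTransform -- the input collection and tag name are assumptions, not the IT's table-provider setup:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    class PushDownQuerySketch {
      // hackerNews stands in for a schema-aware PCollection<Row>; the
      // TupleTag id becomes the table name visible to the SQL planner.
      static PCollection<Row> query(PCollection<Row> hackerNews) {
        return PCollectionTuple.of(new TupleTag<>("HACKER_NEWS"), hackerNews)
            .apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
      }
    }
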
    Oct 16, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1995732283690668423.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-UDPlafVzdYkMUvFPq8_F3akXpg81-C-VwyxJtMu2Y3c.jar
    Oct 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash daecd3ec334425a4d48792ac042cf0b440e863564eff68683f7c36b56eae6fbc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-2uzT7DNEJaTUh5KsBCzwtEDoY1ZO_2hoP3w2tW6ub7w.pb
    Oct 16, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 16, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 16, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 16, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 16, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-16_05_45_11-2778178894381701195?project=apache-beam-testing
    Oct 16, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-16_05_45_11-2778178894381701195
    Oct 16, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-16_05_45_11-2778178894381701195
    Oct 16, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-16T12:45:14.504Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:19.246Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.029Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.064Z: Expanding GroupByKey operations into optimizable parts.
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.099Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.162Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.190Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.237Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.537Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.615Z: Starting 5 workers in us-central1-a...
    Oct 16, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:28.245Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 16, 2021 12:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:46:05.544Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 16, 2021 12:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:46:05.569Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 16, 2021 12:46:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:46:26.003Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 16, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:46:30.809Z: Workers have started successfully.
    Oct 16, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:46:30.838Z: Workers have started successfully.
    Oct 16, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:47:00.047Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:47:00.184Z: Cleaning up.
    Oct 16, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:47:00.256Z: Stopping worker pool...
    Oct 16, 2021 12:49:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:49:17.920Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 16, 2021 12:49:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:49:17.960Z: Worker pool stopped.
    Oct 16, 2021 12:49:24 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-16_05_45_11-2778178894381701195 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 3fcaf61d-caa5-473f-aa3d-d41d3e172c40 and timestamp: 2021-10-16T12:49:25.011000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.134

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 12:49:25 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
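
Note: this warning means the run completed but its metrics were dropped because the publisher was not given an InfluxDB measurement/database. A hedged sketch of supplying them, assuming the InfluxDBSettings builder in org.apache.beam.sdk.testutils.publishing -- treat the method names and all values below as assumptions, not verified against this branch:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    class InfluxSettingsSketch {
      // Provides the measurement/database the warning says are missing;
      // the host, database, and measurement strings are placeholders.
      static InfluxDBSettings settings() {
        return InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_test_metrics")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
      }
    }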

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 33.811 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 8s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/uvf44a3eq2epm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2551

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2551/display/redirect>

Changes:


------------------------------------------
[...truncated 348.03 KB...]
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 16, 2021 6:45:24 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 16, 2021 6:45:25 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 16, 2021 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 16, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 16, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 16, 2021 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 16, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 16, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 16, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4448712973903484506.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-0iFCl6Isq1fYgFcb7h8hM8DXlMBtc_28WF30CGjHTEA.jar
    Oct 16, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 16, 2021 6:45:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 16, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 9a2ab7d2a492dc3f5bc88924fed98a4f610f11ea82517b5974b2eb6673ae8e8a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-miq30qSS3D9byIkk_tmKT2EPEeqCUXtZdLLrZnOujoo.pb
    Oct 16, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 16, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 16, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 16, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 16, 2021 6:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-15_23_45_39-2842250562811226881?project=apache-beam-testing
    Oct 16, 2021 6:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-15_23_45_39-2842250562811226881
    Oct 16, 2021 6:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-15_23_45_39-2842250562811226881
    Oct 16, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-16T06:45:42.475Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:48.360Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.189Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.245Z: Expanding GroupByKey operations into optimizable parts.
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.275Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.345Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.373Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.403Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.738Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.817Z: Starting 5 workers in us-central1-a...
    Oct 16, 2021 6:46:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:46:16.708Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 16, 2021 6:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:46:35.185Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 16, 2021 6:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:46:35.213Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 16, 2021 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:46:55.740Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 16, 2021 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:47:01.500Z: Workers have started successfully.
    Oct 16, 2021 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:47:01.553Z: Workers have started successfully.
    Oct 16, 2021 6:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:47:30.335Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 6:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:47:30.477Z: Cleaning up.
    Oct 16, 2021 6:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:47:30.562Z: Stopping worker pool...
    Oct 16, 2021 6:50:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:50:02.269Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 16, 2021 6:50:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:50:02.311Z: Worker pool stopped.
    Oct 16, 2021 6:50:09 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-15_23_45_39-2842250562811226881 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 037ac73b-0bb0-4937-a07a-f92aba67f33d and timestamp: 2021-10-16T06:50:09.450000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.967

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 6:50:10 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 49.342 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 50s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/6f3ydepi4safe

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2550

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2550/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11480] Use snippets for DataFrame examples (#15600)

[noreply] Allow multiple Python worker processe to share the same VM. (#15642)

[noreply] [BEAM-12564] Implement Series.hasnans (#15729)

[noreply] Minor: Fix frames_test.py equality check for non-frame outputs (#15734)

[noreply] [BEAM-12769] Adds integration tests for Java Class Lookup based

[Brian Hulette] Address BEAM-4028 cleanup TODOs

[Brian Hulette] Bump Dataflow container to beam-master-20211015

[noreply] [BEAM-13052] Restructure pubsublite folder to move non-user interface


------------------------------------------
[...truncated 355.41 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 4 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 16, 2021 12:50:03 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 16, 2021 12:50:03 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 16, 2021 12:50:04 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 16, 2021 12:50:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:50:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:50:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 12:50:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:50:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:50:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 12:50:10 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 16, 2021 12:50:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 16, 2021 12:50:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 16, 2021 12:50:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 16, 2021 12:50:14 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 16, 2021 12:50:15 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3716925047183440446.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-cdYIGqTgXqb6LD7pgnX6OW9VayY0THyo08TuZAMg3eo.jar
    Oct 16, 2021 12:50:15 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 16, 2021 12:50:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 16, 2021 12:50:15 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash a75cf6738b1d70d80545e0d19992b1a0320b9c15e388769840b23de10c9a5533> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-p1z2c4sdcNgFReDRmZKxoDILnBXjiHaYQLI94QyaVTM.pb
    Oct 16, 2021 12:50:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 16, 2021 12:50:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 16, 2021 12:50:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 16, 2021 12:50:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 16, 2021 12:50:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-15_17_50_18-4931514561962804645?project=apache-beam-testing
    Oct 16, 2021 12:50:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-15_17_50_18-4931514561962804645
    Oct 16, 2021 12:50:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-15_17_50_18-4931514561962804645
    Oct 16, 2021 12:50:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-16T00:50:21.487Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 16, 2021 12:50:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:28.584Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:29.334Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:29.489Z: Expanding GroupByKey operations into optimizable parts.
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:29.517Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:29.581Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:29.612Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:29.643Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:29.988Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:30.074Z: Starting 5 workers in us-central1-c...
    Oct 16, 2021 12:50:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:40.798Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
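
As the message suggests, hitting the 100-descriptor limit is addressed by deleting old or unused custom metric descriptors through the Cloud Monitoring v3 API. A minimal sketch using the google-cloud-monitoring Java client; the descriptor deleted at the end is hypothetical:

    import com.google.api.MetricDescriptor;
    import com.google.cloud.monitoring.v3.MetricServiceClient;

    try (MetricServiceClient client = MetricServiceClient.create()) {
      // List custom descriptors to find candidates for deletion.
      for (MetricDescriptor d :
          client.listMetricDescriptors("projects/apache-beam-testing").iterateAll()) {
        if (d.getType().startsWith("custom.googleapis.com/")) {
          System.out.println(d.getName());
        }
      }
      // Delete one by its full resource name (hypothetical descriptor).
      client.deleteMetricDescriptor(
          "projects/apache-beam-testing/metricDescriptors/custom.googleapis.com/unused_counter");
    }
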
    Oct 16, 2021 12:51:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:51:13.298Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 16, 2021 12:51:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:51:38.862Z: Workers have started successfully.
    Oct 16, 2021 12:51:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:51:38.894Z: Workers have started successfully.
    Oct 16, 2021 12:52:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:52:06.517Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 12:52:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:52:06.648Z: Cleaning up.
    Oct 16, 2021 12:52:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:52:06.719Z: Stopping worker pool...
    Oct 16, 2021 12:54:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:54:33.237Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 16, 2021 12:54:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:54:33.280Z: Worker pool stopped.
    Oct 16, 2021 12:54:39 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-15_17_50_18-4931514561962804645 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5a5ea29e-c704-4dd0-bc9c-3ae759c0f777 and timestamp: 2021-10-16T00:54:39.092000000Z:
                     Metric:                    Value:
                   read_time                     8.172
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 12:54:39 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
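
The publisher skips InfluxDB when no measurement/database is configured, which is what this warning reports. A sketch of the extra pipeline options that would enable publishing, assuming Beam's standard InfluxDB test flags (the flag names and values here are an assumption, not taken from this run):

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"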

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 40.146 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 10m 19s
152 actionable tasks: 108 executed, 44 from cache

Publishing build scan...
https://gradle.com/s/mcicesrxiwb3o

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2549

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2549/display/redirect?page=changes>

Changes:

[noreply] Corrected Join Example

[noreply] Minor: Add more links to DataFrame API documentation (#15661)


------------------------------------------
[...truncated 337.49 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c8e4b020b99ccb0e041b11d4c137bdf3
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 15, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
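
The deprecation above is a straight flag rename; the same (empty) value this run passes would move to the new option:

    --workerHarnessContainerImage=   ->   --sdkContainerImage=
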
    Oct 15, 2021 6:44:53 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 15, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 15, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
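
This failure is exactly what the exception text points at: the Row output of ParDo(RowMonitor) has no schema, so no coder can be inferred for it. A minimal sketch of the suggested fix via PCollection.setRowSchema; the field names follow the query above, while the DoFn name, the field types, and the input collection `rows` are hypothetical stand-ins:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    Schema schema =
        Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

    // rows: an existing PCollection whose DoFn emits Row elements.
    PCollection<Row> monitored =
        rows.apply(ParDo.of(new RowMonitorFn()))  // hypothetical DoFn producing Rows
            .setRowSchema(schema);                // lets Beam infer a Row coder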

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 15, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
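
The plan above shows both the projection (usedFields) and the filter being pushed into the BigQuery source. At the IO level this corresponds to a Storage API (DIRECT_READ) read with selected fields and a row restriction; a minimal sketch against BigQueryIO's public API, with an illustrative table reference:

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    pipeline.apply(
        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")  // illustrative table
            .withMethod(Method.DIRECT_READ)                 // BigQuery Storage Read API
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
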
    Oct 15, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 15, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 15, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 15, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8870401464592203205.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-TubJEJo0JReG3C3PG7Umi1ifHANYnYUdzQgNPQ4lCaQ.jar
    Oct 15, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-vendor-calcite-1_26_0/0.1/5fae4e97a2d8739462bd1572e48d01228766b6ef/beam-vendor-calcite-1_26_0-0.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-calcite-1_26_0-0.1-pYZ7esxRWyhKmBqBdfrpnxvg8woyykTvGbaCvLtyRyA.jar
    Oct 15, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 1 second
    Oct 15, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 15, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 51f7cf324c7ed91c38a3a3da5110ddd50da97e5a1cafd1c8407dc2ec9ff6a9fd> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-UffPMkx-2Rw4o6PaURDd1Q2pflocr9HIQH3C7J_2qf0.pb
    Oct 15, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 15, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 15, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 15, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 15, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-15_11_45_07-14728167554486098967?project=apache-beam-testing
    Oct 15, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-15_11_45_07-14728167554486098967
    Oct 15, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-15_11_45_07-14728167554486098967
    Oct 15, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-15T18:45:10.475Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 15, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:17.324Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 15, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:18.409Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 15, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:18.458Z: Expanding GroupByKey operations into optimizable parts.
    Oct 15, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:18.508Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 15, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:18.590Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 15, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:18.646Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 15, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:18.782Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 15, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:19.208Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:19.275Z: Starting 5 workers in us-central1-a...
    Oct 15, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:45.501Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 15, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:59.183Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 15, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:46:24.829Z: Workers have started successfully.
    Oct 15, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:46:24.859Z: Workers have started successfully.
    Oct 15, 2021 6:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:46:52.705Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 6:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:46:52.853Z: Cleaning up.
    Oct 15, 2021 6:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:46:52.928Z: Stopping worker pool...
    Oct 15, 2021 6:49:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:49:17.343Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 15, 2021 6:49:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:49:17.378Z: Worker pool stopped.
    Oct 15, 2021 6:49:22 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-15_11_45_07-14728167554486098967 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 87cc2f5c-f737-4dfe-8ca5-fb5afe83a34c and timestamp: 2021-10-15T18:49:22.869000000Z:
                     Metric:                    Value:
                   read_time                     8.296
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 6:49:23 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 34.334 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 4s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/qy6z4idv3udww

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2548

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2548/display/redirect>

Changes:


------------------------------------------
[...truncated 345.15 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c8e4b020b99ccb0e041b11d4c137bdf3
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 15, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 15, 2021 12:45:05 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 15, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 15, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 15, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 15, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 15, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 15, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7237647777722747071.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-wPqyRNsshdRZwos9sMNmncI6BbS8WkvZnjJyrNhLYjA.jar
    Oct 15, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 15, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 15, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 7bd831eba26cff554bfc43ab0f670659aa558e9b19b07e046d85acd4da929b14> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-e9gx66Js_1VL_EOrD2cGWapVjpsZsH4EbYWs1NqSmxQ.pb
    Oct 15, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 15, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 15, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 15, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 15, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-15_05_45_17-4262620966407755275?project=apache-beam-testing
    Oct 15, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-15_05_45_17-4262620966407755275
    Oct 15, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-15_05_45_17-4262620966407755275
    Oct 15, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-15T12:45:20.864Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 15, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:25.686Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:26.531Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:26.568Z: Expanding GroupByKey operations into optimizable parts.
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:26.612Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:26.671Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:26.698Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:26.730Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:27.054Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:27.122Z: Starting 5 workers in us-central1-a...
    Oct 15, 2021 12:45:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:54.570Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 15, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:46:23.939Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 15, 2021 12:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:46:53.077Z: Workers have started successfully.
    Oct 15, 2021 12:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:46:53.110Z: Workers have started successfully.
    Oct 15, 2021 12:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:47:20.309Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 12:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:47:20.442Z: Cleaning up.
    Oct 15, 2021 12:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:47:20.518Z: Stopping worker pool...
    Oct 15, 2021 12:49:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:49:46.195Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 15, 2021 12:49:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:49:46.235Z: Worker pool stopped.
    Oct 15, 2021 12:49:54 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-15_05_45_17-4262620966407755275 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ebe1b329-ee34-4373-a504-d2c48c3425cc and timestamp: 2021-10-15T12:49:54.054000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.892

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 12:49:54 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 53.163 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 35s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/gp7wsbd5awr4k

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2547

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2547/display/redirect?page=changes>

Changes:

[vachan] Remove unnecessary ERROR logging.


------------------------------------------
[...truncated 341.49 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 15, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 15, 2021 6:45:02 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 15, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 15, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1413020570]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1220110338]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 15, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
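
For context, the push-down visible above amounts, at the BigQueryIO level, to reading only the used fields and attaching the supported predicate as a Storage Read API row restriction. A rough sketch under those assumptions follows; the table reference is illustrative, and actually executing it would need GCP credentials:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS") // illustrative table
                .withMethod(Method.DIRECT_READ)
                // Projection push-down: only the used fields are requested.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Filter push-down: the supported predicate from the plan above.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
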
    Oct 15, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 15, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 15, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 15, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-qOxdI4KUjo3GTP9rF7KeqZGuhs9XeyH-VdOKR6HUq6Q.jar
    Oct 15, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test863260845737003274.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-rHjSdeQ3ukyVfnPA3YR3A5diMxTM45lJlgL7egZzO3k.jar
    Oct 15, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 15, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 15, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash f542bb940cfd8a6535e46033e4db5e57478c42f7111b63d9e1207cb923256128> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-9UK7lAz9imU15GAz5NteV0eMQvcRG2PZ4SB8uSMlYSg.pb
    Oct 15, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 15, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 15, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 15, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 15, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-14_23_45_17-8007673697252368974?project=apache-beam-testing
    Oct 15, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-14_23_45_17-8007673697252368974
    Oct 15, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-14_23_45_17-8007673697252368974
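
    (The matching status check, assuming the same project and region, would use the standard describe subcommand rather than cancel:)
    > gcloud dataflow jobs --project=apache-beam-testing describe --region=us-central1 2021-10-14_23_45_17-8007673697252368974
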
    Oct 15, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-15T06:45:20.827Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 15, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:27.826Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 15, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:28.498Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 15, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:28.544Z: Expanding GroupByKey operations into optimizable parts.
    Oct 15, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:28.578Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 15, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:28.632Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 15, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:28.663Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 15, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:28.696Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 15, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:29.031Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:29.224Z: Starting 5 workers in us-central1-a...
    Oct 15, 2021 6:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:46:00.783Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 15, 2021 6:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:46:08.656Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 15, 2021 6:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:46:08.687Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 15, 2021 6:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:46:18.962Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 15, 2021 6:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:46:43.802Z: Workers have started successfully.
    Oct 15, 2021 6:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:46:43.846Z: Workers have started successfully.
    Oct 15, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:47:11.189Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:47:11.317Z: Cleaning up.
    Oct 15, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:47:11.389Z: Stopping worker pool...
    Oct 15, 2021 6:49:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:49:37.079Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 15, 2021 6:49:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:49:37.150Z: Worker pool stopped.
    Oct 15, 2021 6:49:44 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-14_23_45_17-8007673697252368974 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5691b6bb-cb8c-479a-89ad-0c52e772169f and timestamp: 2021-10-15T06:49:44.088000000Z:
                     Metric:                    Value:
                   read_time                     7.498
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 6:49:44 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
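
The warning above means the run's metrics were computed but never exported: the InfluxDB publisher found no measurement/database configured. A sketch of the additional entries that would go into the -DbeamTestPipelineOptions array to enable publishing, assuming the usual Beam perf-test option names (--influxMeasurement, --influxDatabase, --influxHost); the database and host values here are placeholders, not this job's configuration:

    "--influxMeasurement=sql_bqio_read_java_batch",
    "--influxDatabase=beam_test_metrics",
    "--influxHost=http://localhost:8086"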

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 48.455 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 23s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wr3y2o2zps43e

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2546

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2546/display/redirect?page=changes>

Changes:

[Robert Bradshaw] [BEAM-13053] Avoid runner v2 when streaming engine explicitly disabled.

[noreply] [Go SDK] Small typo fixes. (#15725)

[noreply] [BEAM-13038] Migrate to use a context object within

[noreply] [BEAM-12135] Avoid windowing overhead in Spark global GBK. (#15637)

[noreply] [BEAM-12907] Run DataFrame API tests with multiple pandas versions


------------------------------------------
[...truncated 376.22 KB...]
    Oct 15, 2021 12:52:49 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 15, 2021 12:52:50 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 15, 2021 12:52:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:52:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:52:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 12:52:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:52:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:52:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 12:52:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1841175909]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:52:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 12:52:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:52:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:52:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 12:52:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 15, 2021 12:52:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 15, 2021 12:52:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 15, 2021 12:53:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 15, 2021 12:53:04 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 15, 2021 12:53:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3248217378752106573.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-yUaFrTqZh9JOvTHMAfvwkvNh8AMPX1PA5_Yi9fgWuWo.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/kafka/1.15.1/ccd374ca7e5d8b06fb31ecffeb03abc42feada84/kafka-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/kafka-1.15.1-Z4qZ3jdWXDvdB7xD9VivkdVQiRZWNWUhPSIANVCSZDA.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-api/3.2.7/81408fc988c229ea11354fee9902c47842343f04/docker-java-api-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-api-3.2.7-AwtXDacySf3gUHoixVjjNqW0wUI0YcGJjMO-N7kxBho.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.mongo/2.2.0/781d14f4e3d9eeb0b4c3e00a4ec165a04b3dc5c4/de.flapdoodle.embed.mongo-2.2.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.mongo-2.2.0-vNy3lJC0jW9u4Cy1AHsqSbjRUqOTX9ycpEmHkht7vvk.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.process/2.1.2/986b38302fa10018d5baccee7f655c31ee9afe5b/de.flapdoodle.embed.process-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.process-2.1.2-OasY7D5KRAimcZcWcjFwgi8Qb4B-iff1FfrVeNSih6A.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/janino/3.0.11/e699e368095379ba0402ea1780a87fcaea16e68f/janino-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/janino-3.0.11-kje3HSMpGA5ZIQ6aqhAO4xNFTvCuWIYIx1yxkxlZG-E.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/testcontainers/1.15.1/91e6dfab8f141f77c6a0dd147a94bd186993a22c/testcontainers-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/testcontainers-1.15.1-_fMrLpRXSowMnAISRc73bMk5UqrXNnsv6vkPXOZRXu0.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport-zerodep/3.2.7/d74cb05fabc57e9ec441dc39d0bc58ad649fad3d/docker-java-transport-zerodep-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-zerodep-3.2.7-O2a9rUb-899yDZMu4E8j9ItFOkNoggqbLP2xaLmOwJY.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/commons-compiler/3.0.11/f2a6ec7fbc929c9fc87ff8bb486c0574951c5b09/commons-compiler-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-compiler-3.0.11-DxpPXyZccBoxkzJErnBF_O8YtPpZUEF-Je5wvlDd2s8.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-csv/1.8/37ca9a9aa2d4be2599e55506a6d3170dd7a3df4/commons-csv-1.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-csv-1.8-qL1WZS7UZo2dWjOZSuUvWbnjnI6w68tmhOaK7udXmmE.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport/3.2.7/315903a129f530422747efc163dd255f0fa2555e/docker-java-transport-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-3.2.7-ynNXlSXhDSyiLCXkA4c9E9_HGeKB7-wbpEzmswFrI-A.jar
    Oct 15, 2021 12:53:12 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 239 files cached, 11 files newly uploaded in 8 seconds
    Oct 15, 2021 12:53:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 15, 2021 12:53:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 964d17d19ca3565b6aae1d506eac0fead2fe0c5f08edcbbfd55dd81cbd175fad> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-lk0X0ZyjVltqrh1QbqwP6tL-DF8I7cu_1V3YHL0XX60.pb
    Oct 15, 2021 12:53:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 15, 2021 12:53:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 15, 2021 12:53:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 15, 2021 12:53:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 15, 2021 12:53:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-14_17_53_17-17866899255543482984?project=apache-beam-testing
    Oct 15, 2021 12:53:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-14_17_53_17-17866899255543482984
    Oct 15, 2021 12:53:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-14_17_53_17-17866899255543482984
    Oct 15, 2021 12:53:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-15T00:53:20.184Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 15, 2021 12:53:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:26.758Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:27.504Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:27.557Z: Expanding GroupByKey operations into optimizable parts.
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:27.581Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:27.660Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:27.686Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:27.711Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:28.019Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:28.091Z: Starting 5 workers in us-central1-c...
    Oct 15, 2021 12:53:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:34.531Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 15, 2021 12:54:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:58.724Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 15, 2021 12:54:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:58.754Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 15, 2021 12:54:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:54:09.038Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 15, 2021 12:54:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:54:32.853Z: Workers have started successfully.
    Oct 15, 2021 12:54:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:54:32.888Z: Workers have started successfully.
    Oct 15, 2021 12:55:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:55:03.480Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 12:55:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:55:03.637Z: Cleaning up.
    Oct 15, 2021 12:55:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:55:03.705Z: Stopping worker pool...
    Oct 15, 2021 12:57:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:57:18.655Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 15, 2021 12:57:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:57:18.702Z: Worker pool stopped.
    Oct 15, 2021 12:57:24 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-14_17_53_17-17866899255543482984 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c3f5e145-79ee-4892-9a8d-370e05968926 and timestamp: 2021-10-15T00:57:24.121000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.211

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 12:57:24 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 10 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.039 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 41.691 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 12m 47s
152 actionable tasks: 118 executed, 34 from cache

Publishing build scan...
https://gradle.com/s/z36zgcb4p22ci

Stopped 9 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2545

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2545/display/redirect?page=changes>

Changes:

[ilya.kozyrev] Add java executor to compile java code in created file system

[zyichi] Use daemon thread to report lull operation and move code to

[noreply] Merge pull request #15722 from [BEAM-13022] [Playground] Run code on the

[Luke Cwik] [BEAM-12640] Skip checkerframework on generated code/test libraries for

[Luke Cwik] [BEAM-13015] Add tiny bundle processing benchmark for Java SDK harness.

[noreply] [BEAM-12922] Test MapState putIfAbsent with no read (#15704)


------------------------------------------
[...truncated 388.37 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 12 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c78b4aae5a459d999f1acebc29b3fde3
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 12'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 12'
Successfully started process 'Gradle Test Executor 12'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 14, 2021 6:57:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 14, 2021 6:57:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 14, 2021 6:57:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 14, 2021 6:57:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:57:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:57:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 6:57:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:57:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:57:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
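    The pushed-down field list and filter above correspond, at the BigQueryIO level, to a Storage Read API session with selected fields and a row restriction. A hedged sketch of a roughly equivalent direct read (the table name is illustrative, not the IT's configured table):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // illustrative table
                .withMethod(TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
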
    Oct 14, 2021 6:58:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 14, 2021 6:58:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 14, 2021 6:58:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 14, 2021 6:58:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1382602716978478729.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-aHvARrbnqwZA9OApVdCw4aJcpkgJqhq69vc7NrAdsss.jar
    Oct 14, 2021 6:58:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 14, 2021 6:58:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 14, 2021 6:58:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 279a1c0e5056401ef0d809adceeedc89d516ee399d54ea9d929664e464986c51> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-J5ocDlBWQB7w2Amtzu7cidUW7jmdVOqdkpZk5GSYbFE.pb
    Oct 14, 2021 6:58:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 14, 2021 6:58:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 14, 2021 6:58:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 14, 2021 6:58:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 14, 2021 6:58:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-14_11_58_08-9255479186759955978?project=apache-beam-testing
    Oct 14, 2021 6:58:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-14_11_58_08-9255479186759955978
    Oct 14, 2021 6:58:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-14_11_58_08-9255479186759955978
    Oct 14, 2021 6:58:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-14T18:58:11.167Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 14, 2021 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:15.337Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 14, 2021 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:15.969Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 14, 2021 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:16.012Z: Expanding GroupByKey operations into optimizable parts.
    Oct 14, 2021 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:16.041Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 14, 2021 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:16.105Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 14, 2021 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:16.128Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 14, 2021 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:16.165Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 14, 2021 6:58:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:16.461Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 6:58:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:16.536Z: Starting 5 workers in us-central1-a...
    Oct 14, 2021 6:58:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:32.745Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 14, 2021 6:59:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:59:00.221Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 14, 2021 6:59:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:59:25.107Z: Workers have started successfully.
    Oct 14, 2021 6:59:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:59:25.132Z: Workers have started successfully.
    Oct 14, 2021 6:59:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:59:52.465Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 6:59:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:59:52.588Z: Cleaning up.
    Oct 14, 2021 6:59:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:59:52.658Z: Stopping worker pool...
    Oct 14, 2021 7:02:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T19:02:07.954Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 14, 2021 7:02:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T19:02:07.991Z: Worker pool stopped.
    Oct 14, 2021 7:02:14 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-14_11_58_08-9255479186759955978 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c6204950-9d31-4e6b-86cf-2ef768541a0f and timestamp: 2021-10-14T19:02:14.579000000Z:
                     Metric:                    Value:
                   read_time                     8.067
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 7:02:15 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
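
This warning only skips metrics export; it does not affect the test outcome. A hedged sketch of the settings the publisher appears to be missing (the builder shape is an assumption based on the org.apache.beam.sdk.testutils.publishing package named in the log line; exact wiring may differ across Beam versions):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    // All values illustrative; the warning fires when measurement/database are unset.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_test_metrics")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();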

Gradle Test Executor 12 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 23.941 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 17m 49s
152 actionable tasks: 143 executed, 9 from cache

Publishing build scan...
https://gradle.com/s/aooopaliq5whw

Stopped 11 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2544

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2544/display/redirect>

Changes:


------------------------------------------
[...truncated 337.13 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
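
The JSON passed via -DbeamTestPipelineOptions in the command above is read by TestPipeline from that system property and parsed like ordinary command-line flags. A sketch of the equivalent parsing with a subset of the same flags:

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class OptionsSketch {
      public static void main(String[] args) {
        // A subset of the flags above, parsed the way Beam parses argv.
        String[] flags = {
          "--project=apache-beam-testing",
          "--tempLocation=gs://temp-storage-for-perf-tests/loadtests",
          "--runner=DataflowRunner",
          "--numWorkers=5",
          "--autoscalingAlgorithm=NONE",
          "--region=us-central1"
        };
        DataflowPipelineOptions opts =
            PipelineOptionsFactory.fromArgs(flags)
                .withValidation()
                .as(DataflowPipelineOptions.class);
        System.out.println(opts.getRunner()); // DataflowRunner.class
      }
    }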

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 14, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
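    For reference, the invocation above passes the deprecated flag as --workerHarnessContainerImage= (empty); under the non-deprecated name the same empty setting would be spelled --sdkContainerImage=.
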
    Oct 14, 2021 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 14, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 14, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 14, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 14, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 14, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 14, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 14, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2182681013556753547.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-1uzYiVEanmnzAkA_HHPUF6K7JZWR7NpocHAGTnRM4zc.jar
    Oct 14, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 14, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 14, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash a2d810cfe9b69e6de2078cdd4fc6550c8817202c176f6c39ee8a4cbb48e3c6c6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-otgQz-m2nm3iB4zdT8ZVDIgXICwXb2w57opMu0jjxsY.pb
    Oct 14, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 14, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 14, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 14, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 14, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-14_05_45_06-10948096631192102164?project=apache-beam-testing
    Oct 14, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-14_05_45_06-10948096631192102164
    Oct 14, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-14_05_45_06-10948096631192102164
    Oct 14, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-14T12:45:09.946Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 14, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:18.349Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.164Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.208Z: Expanding GroupByKey operations into optimizable parts.
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.233Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.307Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.332Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.362Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.687Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.766Z: Starting 5 workers in us-central1-a...
    Oct 14, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:45.138Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 14, 2021 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:46:03.766Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 14, 2021 12:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:46:29.218Z: Workers have started successfully.
    Oct 14, 2021 12:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:46:29.248Z: Workers have started successfully.
    Oct 14, 2021 12:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:46:57.508Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 12:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:46:57.624Z: Cleaning up.
    Oct 14, 2021 12:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:46:57.697Z: Stopping worker pool...
    Oct 14, 2021 12:49:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:49:22.538Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 14, 2021 12:49:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:49:22.573Z: Worker pool stopped.
    Oct 14, 2021 12:49:28 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-14_05_45_06-10948096631192102164 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6de2c5ce-7bad-46ed-8fef-6e1fa2a2a539 and timestamp: 2021-10-14T12:49:28.636000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.928

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 12:49:29 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 38.598 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 8s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/lbzaidrfvsmni

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2543

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2543/display/redirect?page=changes>

Changes:

[kawaigin] Prepare release of apache-beam-jupyterlab-sidepanel v2.0.0


------------------------------------------
[...truncated 340.68 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 14, 2021 6:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 14, 2021 6:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 14, 2021 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 14, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 14, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 14, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 14, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 14, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2812133051463427131.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6beI723LVZS0pLxVGEBnw6zewY5KmzSVKGcVErlfRNk.jar
    Oct 14, 2021 6:45:15 AM org.apache.beam.sdk.metrics.MetricsEnvironment getCurrentContainer
    WARNING: Reporting metrics are not supported in the current execution environment.
    Oct 14, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 10 seconds
    Oct 14, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 14, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 03522e21521e8fc374c82ce2a40b97d2f2fce5f0b842b23d20a4d35d1c1efee3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-A1IuIVIej8N0yCzipAuX0vL85fC4QrI9IKTTXRwe_uM.pb
    Oct 14, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 14, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 14, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 14, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 14, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-13_23_45_18-10186905973464468890?project=apache-beam-testing
    Oct 14, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-13_23_45_18-10186905973464468890
    Oct 14, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-13_23_45_18-10186905973464468890
    Oct 14, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-14T06:45:21.917Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 14, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:27.003Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:27.729Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:27.757Z: Expanding GroupByKey operations into optimizable parts.
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:27.778Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:27.821Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:27.838Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:27.872Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:28.170Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:28.242Z: Starting 5 workers in us-central1-a...
    Oct 14, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:33.234Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 14, 2021 6:46:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:46:04.503Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 14, 2021 6:46:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:46:04.535Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 14, 2021 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:46:14.792Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 14, 2021 6:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:46:39.026Z: Workers have started successfully.
    Oct 14, 2021 6:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:46:39.055Z: Workers have started successfully.
    Oct 14, 2021 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:47:06.628Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:47:06.830Z: Cleaning up.
    Oct 14, 2021 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:47:06.886Z: Stopping worker pool...
    Oct 14, 2021 6:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:49:25.738Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 14, 2021 6:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:49:25.775Z: Worker pool stopped.
    Oct 14, 2021 6:49:30 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-13_23_45_18-10186905973464468890 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c47a9bd9-de82-4ab7-b88d-2ee2e9969c57 and timestamp: 2021-10-14T06:49:30.975000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.707

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 6:49:31 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 43.38 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/cbbmmbx4gsrsa

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2542

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2542/display/redirect?page=changes>

Changes:

[Udi Meiri] Release guide: minor updates

[noreply] [BEAM-12644] Javascript files moved to assets (#15653)


------------------------------------------
[...truncated 342.21 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 14, 2021 12:46:38 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 14, 2021 12:46:42 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 14, 2021 12:46:43 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 14, 2021 12:46:51 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:46:51 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:46:53 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 12:46:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:46:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:46:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 12:46:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@372074801]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
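
The exception text above names its own remedy: a PCollection of Beam Rows has no inferable coder until a schema is attached via PCollection.setRowSchema, which supplies a RowCoder. The following is a minimal, self-contained sketch of that pattern, not the test's actual code; the class name, the field types, and the pass-through DoFn standing in for RowMonitor are illustrative assumptions:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Illustrative schema for the four projected columns (types assumed).
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        PCollection<Row> rows = p.apply(
            Create.of(Row.withSchema(schema)
                    .addValues("alice", "story", "hello", 3L)
                    .build())
                .withRowSchema(schema));

        // A pass-through ParDo like ParDo(RowMonitor) above: coder inference
        // fails for its Row output, so the schema is re-attached explicitly.
        rows.apply("RowMonitor", ParDo.of(new DoFn<Row, Row>() {
              @ProcessElement
              public void process(@Element Row row, OutputReceiver<Row> out) {
                out.output(row);
              }
            }))
            .setRowSchema(schema); // omitting this reproduces the IllegalStateException above

        p.run().waitUntilFinish();
      }
    }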

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 14, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1712537381]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
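
The same message also offers .setCoder() as an alternative. For Row elements the matching coder is RowCoder.of(schema); a fragment-level sketch, reusing the illustrative schema from the sketch above and a hypothetical RowMonitorFn:

    import org.apache.beam.sdk.coders.RowCoder;

    // Equivalent to setRowSchema for Row outputs: attach the coder directly.
    rows.apply("RowMonitor", ParDo.of(new RowMonitorFn())) // RowMonitorFn is hypothetical
        .setCoder(RowCoder.of(schema));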

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 12:46:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:46:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:46:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 12:46:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:46:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:46:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 12:46:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 14, 2021 12:46:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
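
For reference, the pushed-down read logged above maps onto options BigQueryIO exposes directly: DIRECT_READ selects the BigQuery Storage Read API, withSelectedFields trims the read to the used columns, and withRowRestriction carries the filter. A hedged fragment, given some Pipeline p; the table reference here is hypothetical:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    PCollection<TableRow> stories = p.apply(
        "Read with push-down",
        BigQueryIO.readTableRows()
            .from("some-project:some_dataset.hacker_news") // hypothetical table
            .withMethod(Method.DIRECT_READ)                // Storage Read API
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
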
    Oct 14, 2021 12:47:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 14, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 14, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 14, 2021 12:47:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3586008666161472238.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-pNWakzO7_S1qqCXYC1QPshbxn3hygdZzQNJ3WkDEcto.jar
    Oct 14, 2021 12:47:11 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 5 seconds
    Oct 14, 2021 12:47:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 14, 2021 12:47:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash b74abe21a4e2a14b1a671023e8f87a0cc70403e0506ae3501194dc1a6050d8d3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-t0q-IaTioUsaZxAj6Ph6DMcEA-BQauNQEZTcGmBQ2NM.pb
    Oct 14, 2021 12:47:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 14, 2021 12:47:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 14, 2021 12:47:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 14, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 14, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-13_17_47_17-4909132220303045062?project=apache-beam-testing
    Oct 14, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-13_17_47_17-4909132220303045062
    Oct 14, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-13_17_47_17-4909132220303045062
    Oct 14, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-14T00:47:20.452Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 14, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:27.790Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 14, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:28.482Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 14, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:28.514Z: Expanding GroupByKey operations into optimizable parts.
    Oct 14, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:28.546Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 14, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:28.609Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 14, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:28.635Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 14, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:28.670Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 14, 2021 12:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:28.963Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 12:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:29.032Z: Starting 5 workers in us-central1-a...
    Oct 14, 2021 12:47:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:53.342Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 14, 2021 12:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:48:13.924Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 14, 2021 12:48:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:48:42.653Z: Workers have started successfully.
    Oct 14, 2021 12:48:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:48:42.684Z: Workers have started successfully.
    Oct 14, 2021 12:49:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:49:08.075Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 12:49:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:49:08.247Z: Cleaning up.
    Oct 14, 2021 12:49:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:49:08.337Z: Stopping worker pool...
    Oct 14, 2021 12:51:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:51:36.932Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 14, 2021 12:51:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:51:36.972Z: Worker pool stopped.
    Oct 14, 2021 12:51:43 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-13_17_47_17-4909132220303045062 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 470870de-9f18-49e4-8be5-bba995fae54a and timestamp: 2021-10-14T00:51:43.050000000Z:
                     Metric:                    Value:
                   read_time                     6.984
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 12:51:43 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 5 mins 17.345 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 1s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/vgs62wac2ebjy

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2541

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2541/display/redirect?page=changes>

Changes:

[dmitrii_kuzin] [BEAM-12730] Custom delimiter always bytes

[noreply] Minor: Add base_func to Series.aggregate expression names (#15706)

[noreply] [BEAM-13013] Extend cross language expansion. (#15698)

[noreply] Minor: Clarify confusing boolean logic for selecting PerWindowInvoker

[noreply] [BEAM-4424] Execute hooks in enabling order (#15713)

[noreply] [BEAM-11097] Update cache with metrics counting, windowing (#15717)


------------------------------------------
[...truncated 337.50 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 13, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 13, 2021 6:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 13, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 13, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 13, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 13, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 13, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 13, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 13, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8420574861511368776.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-P2Bhkz_sJfL7Q44uCBCqe2Xl4LF6Avhx3qXt_9Nkgss.jar
    Oct 13, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 13, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 13, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash f0b712b6dd7ee126e924bf9d0f3f6ffc38d2a1f50187947e10c8b45c2f6700bd> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8LcStt1-4SbpJL-dDz9v_DjSofUBh5R-EMi0XC9nAL0.pb
    Oct 13, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 13, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 13, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 13, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 13, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-13_11_45_08-7133238093593566270?project=apache-beam-testing
    Oct 13, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-13_11_45_08-7133238093593566270
    Oct 13, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-13_11_45_08-7133238093593566270
    Oct 13, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-13T18:45:11.818Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:17.665Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.408Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.439Z: Expanding GroupByKey operations into optimizable parts.
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.467Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.537Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.573Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.597Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.919Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.986Z: Starting 5 workers in us-central1-a...
    Oct 13, 2021 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:28.162Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 13, 2021 6:46:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:46:16.983Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 13, 2021 6:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:46:42.016Z: Workers have started successfully.
    Oct 13, 2021 6:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:46:42.044Z: Workers have started successfully.
    Oct 13, 2021 6:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:47:11.336Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 6:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:47:11.505Z: Cleaning up.
    Oct 13, 2021 6:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:47:11.581Z: Stopping worker pool...
    Oct 13, 2021 6:49:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:49:39.615Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 13, 2021 6:49:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:49:39.658Z: Worker pool stopped.
    Oct 13, 2021 6:49:46 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-13_11_45_08-7133238093593566270 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 59461b1a-5fe0-47e5-888d-d482f1a0e346 and timestamp: 2021-10-13T18:49:46.513000000Z:
                     Metric:                    Value:
                   read_time                     8.923
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 6:49:46 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 57.197 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 29s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ejnkppnwz66y6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2540

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2540/display/redirect>

Changes:


------------------------------------------
[...truncated 346.76 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 4 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 13, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 13, 2021 12:45:22 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 13, 2021 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 13, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
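
The exception names its own remedies: set a Coder explicitly, or, since the elements are Beam Rows, attach a schema. A minimal sketch of the schema route, assuming hypothetical field types for the four selected columns (an illustration, not the actual BigQueryIOPushDownIT fix):

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    /** Hypothetical helper: attach a schema so a RowCoder can be inferred. */
    static PCollection<Row> withRowSchema(PCollection<Row> rows) {
      Schema schema =
          Schema.builder()
              .addNullableField("author", Schema.FieldType.STRING)
              .addNullableField("type", Schema.FieldType.STRING)
              .addNullableField("title", Schema.FieldType.STRING)
              .addNullableField("score", Schema.FieldType.INT64)
              .build();
      // The explicit-coder form named first in the message would be:
      // rows.setCoder(RowCoder.of(schema));
      return rows.setRowSchema(schema);
    }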

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
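
Both failures happen while the SQL above is being expanded into a pipeline, before any job is submitted. For context, a query of this shape is normally applied to a schema-aware PCollection via SqlTransform; a minimal sketch over the implicit PCOLLECTION table (an illustration only; the IT itself resolves beam.HACKER_NEWS through a BigQuery table provider and BeamSqlRelUtils.toPCollection):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    /** Hypothetical: the input must already carry a schema with these fields. */
    static PCollection<Row> storiesAndJobs(PCollection<Row> hackerNews) {
      return hackerNews.apply(
          SqlTransform.query(
              "SELECT `by` AS author, type, title, score "
                  + "FROM PCOLLECTION "
                  + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
    }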

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 13, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
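
The plan and the INFO line above show what push-down buys: only the four used fields are read, and the filter is evaluated by the BigQuery Storage API rather than in the pipeline. Hand-written, a roughly equivalent direct read would look like this sketch, following the documented BigQueryIO DIRECT_READ options (the table name here is an assumption for illustration):

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.values.PCollection;

    Pipeline pipeline = Pipeline.create();
    PCollection<TableRow> rows =
        pipeline.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // assumed table name
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                // Mirrors usedFields=[[by, type, title, score]]:
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Mirrors the pushed-down filter logged above:
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
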
    Oct 13, 2021 12:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 13, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 13, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 13, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6472796556154093260.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6-OpaFPbxZfQnvZ74BQbvd3dTKRJrYuR9bFPtZrnM04.jar
    Oct 13, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 1 second
    Oct 13, 2021 12:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 13, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 9083bf457d324a2e69be7624cd67d20503d4061abda0914c7de495e5a4f7e74f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-kIO_RX0ySi5pvnYkzWfSBQPUBhq9oJFMfeSV5aT3508.pb
    Oct 13, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 13, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 13, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 13, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 13, 2021 12:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-13_05_45_37-13725460614555904800?project=apache-beam-testing
    Oct 13, 2021 12:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-13_05_45_37-13725460614555904800
    Oct 13, 2021 12:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-13_05_45_37-13725460614555904800
    Oct 13, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-13T12:45:40.850Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:45.122Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:45.914Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:45.952Z: Expanding GroupByKey operations into optimizable parts.
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:45.982Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:46.053Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:46.074Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:46.096Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:46.401Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:46.462Z: Starting 5 workers in us-central1-a...
    Oct 13, 2021 12:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:57.500Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 13, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:46:31.975Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 13, 2021 12:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:47:01.126Z: Workers have started successfully.
    Oct 13, 2021 12:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:47:01.154Z: Workers have started successfully.
    Oct 13, 2021 12:47:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:47:26.826Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 12:47:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:47:26.947Z: Cleaning up.
    Oct 13, 2021 12:47:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:47:27.019Z: Stopping worker pool...
    Oct 13, 2021 12:50:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:50:00.229Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 13, 2021 12:50:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:50:00.292Z: Worker pool stopped.
    Oct 13, 2021 12:50:06 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-13_05_45_37-13725460614555904800 finished with status DONE.
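
A minimal sketch of the launch-and-wait pattern these runner logs reflect (the pipeline variable is hypothetical):

    import org.apache.beam.sdk.PipelineResult;

    PipelineResult result = pipeline.run(); // submits the Dataflow job
    // Blocks until a terminal state (DONE here); result.cancel() would stop
    // the job early, much like the gcloud command printed above.
    PipelineResult.State state = result.waitUntilFinish();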

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 134018ea-e282-4aed-a499-d629c31e0489 and timestamp: 2021-10-13T12:50:06.801000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.658

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 12:50:07 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 49.417 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 47s
152 actionable tasks: 102 executed, 50 from cache

Publishing build scan...
https://gradle.com/s/xsdl7nfgw72cu

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2539

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2539/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12513] Add JIRAs for missing Go SDK features (#15658)


------------------------------------------
[...truncated 338.53 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
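
The -DbeamTestPipelineOptions list in the command line above is materialized into typed options by the test harness; fed through PipelineOptionsFactory directly, the same flags would look roughly like this sketch (an illustration, not the harness code):

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    DataflowPipelineOptions options =
        PipelineOptionsFactory.fromArgs(
                "--project=apache-beam-testing",
                "--runner=DataflowRunner",
                "--region=us-central1",
                "--numWorkers=5",
                "--maxNumWorkers=5",
                "--autoscalingAlgorithm=NONE")
            .withValidation()
            .as(DataflowPipelineOptions.class);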

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 13, 2021 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 13, 2021 6:44:55 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 13, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 13, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 13, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 13, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 13, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 13, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 13, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5329754033931223608.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-eg3j8nxroWsXWhvqR6sqs5vs332QzaKjD1vsyC9Z9YE.jar
    Oct 13, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 13, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 13, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash 26b54604c4b76432d3c68fd9600bd0ceb52d705abda6f6bad0fb171a3da65ab8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-JrVGBMS3ZDLTxo_ZYAvQzrUtcFq9pva60PsXGj2mWrg.pb
    Oct 13, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 13, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 13, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 13, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 13, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-12_23_45_08-16050976645542794067?project=apache-beam-testing
    Oct 13, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-12_23_45_08-16050976645542794067
    Oct 13, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-12_23_45_08-16050976645542794067
    Oct 13, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-13T06:45:14.702Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 13, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:22.328Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 13, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:23.090Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 13, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:23.130Z: Expanding GroupByKey operations into optimizable parts.
    Oct 13, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:23.164Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 13, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:23.262Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 13, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:23.324Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 13, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:23.433Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 13, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:24.003Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:24.090Z: Starting 5 workers in us-central1-a...
    Oct 13, 2021 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:47.036Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 13, 2021 6:46:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:46:11.991Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 13, 2021 6:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:46:37.849Z: Workers have started successfully.
    Oct 13, 2021 6:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:46:37.930Z: Workers have started successfully.
    Oct 13, 2021 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:47:04.919Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:47:05.051Z: Cleaning up.
    Oct 13, 2021 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:47:05.152Z: Stopping worker pool...
    Oct 13, 2021 6:49:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:49:28.840Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 13, 2021 6:49:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:49:28.887Z: Worker pool stopped.
    Oct 13, 2021 6:50:29 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-12_23_45_08-16050976645542794067 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 36707a6e-b0ec-428a-b3ca-5f3bcf4395e4 and timestamp: 2021-10-13T06:50:29.606000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.513

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 6:50:29 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 5 mins 40.216 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/nyuyzlzxcc3aw

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2538

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2538/display/redirect?page=changes>

Changes:

[cgray] [BEAM-7169] Fix singleton assertion in PAssert javadoc.

[noreply] Revert "[BEAM-10913] - Forcing update of YAML file by running kubectl

[kawaigin] [BEAM-12997] Upgraded labextension to V3


------------------------------------------
[...truncated 339.08 KB...]
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 13, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 13, 2021 12:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 13, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 13, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 13, 2021 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 13, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 13, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 13, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5890577408425665627.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ovOpIMFGfYMPgXz9nCHRcqfUtqQ9XmzJ40yBKdUcEAA.jar
    Oct 13, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 13, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 13, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 3d32be64863b6479ae02f48feaeb168c366b2a73d3888d04661245ff17bcd1fc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-PTK-ZIY7ZHmuAvSP6usWjDZrKnPTiI0EZhJF_xe80fw.pb
    Oct 13, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 13, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 13, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 13, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 13, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-12_17_45_05-1080144184160942624?project=apache-beam-testing
    Oct 13, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-12_17_45_05-1080144184160942624
    Oct 13, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-12_17_45_05-1080144184160942624
    Oct 13, 2021 12:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-13T00:45:09.007Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 13, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:15.973Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:16.784Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:16.871Z: Expanding GroupByKey operations into optimizable parts.
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:16.908Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:16.994Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:17.022Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:17.053Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:17.515Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:17.605Z: Starting 5 workers in us-central1-c...
    Oct 13, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:44.250Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 13, 2021 12:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:51.425Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 13, 2021 12:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:51.450Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 13, 2021 12:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:46:01.702Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 13, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:46:26.111Z: Workers have started successfully.
    Oct 13, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:46:26.144Z: Workers have started successfully.
    Oct 13, 2021 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:46:56.104Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:46:56.327Z: Cleaning up.
    Oct 13, 2021 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:46:56.423Z: Stopping worker pool...
    Oct 13, 2021 12:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:49:19.554Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 13, 2021 12:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:49:19.595Z: Worker pool stopped.
    Oct 13, 2021 12:49:25 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-12_17_45_05-1080144184160942624 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cc740fa4-6677-4c79-af50-065edcf5ff5a and timestamp: 2021-10-13T00:49:25.658000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.499

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 12:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
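
    The warning above comes from Beam's InfluxDB metrics publisher: without a
    measurement and database configured, the collected load-test metrics are
    only printed, not stored. A minimal sketch of supplying them, assuming the
    builder API of org.apache.beam.sdk.testutils.publishing.InfluxDBSettings
    (the host, database, and measurement values here are placeholders, not
    this job's real configuration):

        import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

        // Hypothetical values; substitute the real InfluxDB endpoint and names.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")
                .withDatabase("beam_test_metrics")
                .withMeasurement("sql_bqio_read_java_batch")
                .get();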

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 38.55 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/x2yytbifuhxm6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2537

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2537/display/redirect?page=changes>

Changes:

[heejong] Update Dataflow Python container tag

[noreply] Merge pull request #15667 from [BEAM-12730] Add custom delimiters to

[noreply] Merge pull request #15700 from [BEAM-12987] [Playground] Add page UI


------------------------------------------
[...truncated 338.26 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar>","--region=us-central1"] -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 12, 2021 6:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 12, 2021 6:44:59 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 12, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
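
    The root-cause list in this exception is the relevant hint: a
    PCollection<Row> needs a schema before coder inference can succeed. A
    minimal sketch of the fix the message points at, assuming `rows` is the
    PCollection<Row> at the point where the coder is missing and that the
    field names and types below (mirroring the four selected columns) are
    illustrative assumptions:

        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();
        // setRowSchema installs a RowCoder for the collection, which
        // satisfies the coder check that fails in the stack trace above.
        rows.setRowSchema(schema);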

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
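
    Note the contrast with the two failing tests above: there the plan keeps a
    BeamCalcRel that filters after reading the whole table, while here the
    projection (usedFields) and filter are handed to the BigQuery Storage Read
    API, so only the four needed columns of matching rows leave BigQuery.
    Outside of Beam SQL, roughly the same direct read can be expressed with
    BigQueryIO; a hedged sketch, assuming a Pipeline named `pipeline` (the
    table spec is illustrative, not taken from this job):

        import java.util.Arrays;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;

        pipeline.apply(
            BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS") // hypothetical table spec
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
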
    Oct 12, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 12, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 12, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 12, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test9156011587531586222.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-G5FJjMI4Zb3G-2y_r_qh-cNidmMkm8Mkw7mRybaDqzc.jar
    Oct 12, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 12, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 12, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash cada289d233b484d41187a59ee827447e170ebe67ae40dbc9f82b02944aaf846> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ytoonSM7SE1BGHpZ7oJ0R-Fw6-Z65A28n4KwKUSq-EY.pb
    Oct 12, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 12, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 12, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 12, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 12, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-12_11_45_12-3882144811092247120?project=apache-beam-testing
    Oct 12, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-12_11_45_12-3882144811092247120
    Oct 12, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-12_11_45_12-3882144811092247120
    Oct 12, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-12T18:45:15.138Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:49.229Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:49.998Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:50.032Z: Expanding GroupByKey operations into optimizable parts.
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:50.064Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:50.122Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:50.158Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:50.183Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:50.462Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:50.534Z: Starting 5 workers in us-central1-a...
    Oct 12, 2021 6:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:57.251Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 12, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:46:33.207Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 12, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:46:58.325Z: Workers have started successfully.
    Oct 12, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:46:58.360Z: Workers have started successfully.
    Oct 12, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:47:24.575Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:47:24.718Z: Cleaning up.
    Oct 12, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:47:24.778Z: Stopping worker pool...
    Oct 12, 2021 6:49:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:49:46.392Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 12, 2021 6:49:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:49:46.437Z: Worker pool stopped.
    Oct 12, 2021 6:49:52 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-12_11_45_12-3882144811092247120 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6cfe67a8-8759-499b-8a38-a5522f455ed2 and timestamp: 2021-10-12T18:49:52.693000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.854

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 6:49:53 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 58.361 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/cyvoyr2vpez3i

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2536

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2536/display/redirect>

Changes:


------------------------------------------
[...truncated 338.81 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar>","--region=us-central1"] -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 12, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 12, 2021 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 12, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 12, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 12, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 12, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 12, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 12, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7895948252583150716.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-arSWOV_dfe951arcAp7L8XDletVdhHzyc8DIuEdjsFI.jar
    Oct 12, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 12, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 12, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 41fcc841ddfc9ef59ae9760a31a90d082317583a9adbe6d9b045936d1e56e399> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-QfzIQd38nvWa6XYKMakNCCMXWDqa2-bZsEWTbR5W45k.pb
    Oct 12, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 12, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 12, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 12, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 12, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-12_05_45_05-2662568501819662847?project=apache-beam-testing
    Oct 12, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-12_05_45_05-2662568501819662847
    Oct 12, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-12_05_45_05-2662568501819662847
    Oct 12, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-12T12:45:09.039Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 12, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:15.881Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:16.740Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:16.786Z: Expanding GroupByKey operations into optimizable parts.
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:16.826Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:16.884Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:16.916Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:16.951Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:17.332Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:17.408Z: Starting 5 workers in us-central1-c...
    Oct 12, 2021 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:49.636Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 12, 2021 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:50.458Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 12, 2021 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:50.488Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 12, 2021 12:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:46:00.737Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 12, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:46:24.496Z: Workers have started successfully.
    Oct 12, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:46:24.530Z: Workers have started successfully.
    Oct 12, 2021 12:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:46:53.321Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 12:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:46:53.532Z: Cleaning up.
    Oct 12, 2021 12:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:46:53.630Z: Stopping worker pool...
    Oct 12, 2021 12:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:49:17.532Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 12, 2021 12:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:49:17.600Z: Worker pool stopped.
    Oct 12, 2021 12:49:24 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-12_05_45_05-2662568501819662847 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8d27dced-5f5c-4692-850e-013bd0a03311 and timestamp: 2021-10-12T12:49:24.938000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.291

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 12:49:25 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 36.719 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 6s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wor7ifwppuxya

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2535

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2535/display/redirect?page=changes>

Changes:

[noreply] Release 2.33 documentation update (#15543)

[noreply] Merge pull request #15441 from [BEAM-8823] Make FnApiRunner work by


------------------------------------------
[...truncated 339.65 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar>","--region=us-central1"] -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 12, 2021 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 12, 2021 6:45:00 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 12, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 12, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
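
The stack trace names the fix itself: a PCollection of Beam Rows cannot get a coder by inference, so the producing step must attach a schema (PCollection.setRowSchema, or withRowSchema on transforms that accept it). A minimal, self-contained sketch of the pattern follows; the field names are illustrative placeholders that merely echo the query's projection, not the integration test's actual code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFix {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // The schema that coder inference could not discover on its own.
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")
            .build();

        PCollection<Row> rows = p.apply(
            Create.of(Row.withSchema(schema)
                    .addValues("alice", "story", "Hello Beam", 3)
                    .build())
                // Attaching the schema here installs a RowCoder; without it,
                // "Unable to return a default Coder" surfaces downstream,
                // exactly as in the log above.
                .withRowSchema(schema));

        p.run().waitUntilFinish();
      }
    }

Running this with the default Direct runner (beam-runners-direct-java on the classpath) is enough to see the coder resolved; the same setRowSchema/withRowSchema call is what the failing RowMonitor output is missing.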

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
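
For context on where DIRECT_READ and the push-down come from: the BigQuery table is declared to Beam SQL with a read method in its table properties, and the planner then folds supported projections and filters into the source, which is what the BeamPushDownIOSourceRel and the filter line above show. A hedged sketch of that wiring, mirroring the entry points visible in the stack traces (BeamSqlEnv, BeamSqlRelUtils); the DDL, project, dataset, and column list are placeholders, not the test's literal definitions:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.impl.BeamSqlEnv;
    import org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils;
    import org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTableProvider;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        BeamSqlEnv env = BeamSqlEnv.inMemory(new BigQueryTableProvider());
        // "method": "DIRECT_READ" is what produces the BigQueryStorageTableSource
        // and enables the field/filter push-down logged above.
        env.executeDdl(
            "CREATE EXTERNAL TABLE HACKER_NEWS (`by` VARCHAR, type VARCHAR, title VARCHAR, score INT) "
                + "TYPE bigquery "
                + "LOCATION 'some-project:some_dataset.hacker_news' "
                + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'");

        PCollection<Row> rows = BeamSqlRelUtils.toPCollection(
            p,
            env.parseQuery(
                "SELECT `by` AS author, type, title, score FROM HACKER_NEWS "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }

Executing this against a real table would need GCP credentials and billing; the point is only where DIRECT_READ enters the picture and why the planner can hand the WHERE clause to BigQuery instead of evaluating it in a BeamCalcRel.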
    Oct 12, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 12, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 12, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 12, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1656578058968929539.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-vB53NHrbQW8QZoMCtAlaqt_fDfHQ9F8X8WYxoWBOEus.jar
    Oct 12, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 12, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 12, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash a1d7903616da2308799c898c85df2e929be1bf94611a7f759b2e34c14cdda2d4> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-odeQNhbaIwh5nImMhd8ukpvhv5RhGn91my40wUzdotQ.pb
    Oct 12, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 12, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 12, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 12, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 12, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-11_23_45_13-10942643506701589531?project=apache-beam-testing
    Oct 12, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-11_23_45_13-10942643506701589531
    Oct 12, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-11_23_45_13-10942643506701589531
    Oct 12, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-12T06:45:17.883Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:29.764Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:30.505Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:30.556Z: Expanding GroupByKey operations into optimizable parts.
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:30.591Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:30.646Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:30.678Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:30.743Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:31.159Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:31.249Z: Starting 5 workers in us-central1-c...
    Oct 12, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:52.392Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 12, 2021 6:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:46:04.869Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Oct 12, 2021 6:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:46:04.906Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Oct 12, 2021 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:46:15.158Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 12, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:46:39.226Z: Workers have started successfully.
    Oct 12, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:46:39.256Z: Workers have started successfully.
    Oct 12, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:47:05.582Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:47:05.744Z: Cleaning up.
    Oct 12, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:47:05.835Z: Stopping worker pool...
    Oct 12, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:49:22.828Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 12, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:49:22.868Z: Worker pool stopped.
    Oct 12, 2021 6:49:29 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-11_23_45_13-10942643506701589531 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 05d6afe8-5791-4c81-a089-8ca32e5e56cb and timestamp: 2021-10-12T06:49:29.700000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.784

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 6:49:30 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
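
This warning is why no metrics reached InfluxDB despite the test producing results: the publisher in Beam's testutils skips publishing unless a database and measurement are configured. If publishing were wanted, the settings object it consumes looks roughly like this (builder method names per org.apache.beam.sdk.testutils.publishing.InfluxDBSettings as best understood; the host and database values are placeholders, and in the Jenkins job they would normally arrive via pipeline options rather than being hard-coded):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // Hypothetical values; supplying both database and measurement is
        // what would silence the "Missing property" warning above.
        InfluxDBSettings settings = InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_test_metrics")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
        System.out.println("Would publish metrics using: " + settings);
      }
    }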

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 34.085 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/kmemclpwcfima

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2534

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2534/display/redirect?page=changes>

Changes:

[rogan.o.morrow] [BEAM-12875] Register file systems in SparkExecutableStageFunction

[Kyle Weaver] [BEAM-12694] Include datetime in dicom test dataset name.

[noreply] [BEAM-12960] Add extra context for structural DoFn ProcessElement error


------------------------------------------
[...truncated 339.54 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 12, 2021 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 12, 2021 12:44:58 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 12, 2021 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 12, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 12, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 12, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 12, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 12, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6089847685525497808.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-j0RlgzwqoJ9IuGP71TKsPPY-HLJ9FhDCNoESendTWzg.jar
    Oct 12, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 12, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 12, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash dc0f7feb3f06ad79adde85b473982d38bc7dc1dc917e00a73b32700b1f4e1504> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-3A9_6z8GrXmt3oW0c5gtOLx9wdyRfgCnOzJwCx9OFQQ.pb
    Oct 12, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 12, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 12, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 12, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 12, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-11_17_45_12-3290421013320524464?project=apache-beam-testing
    Oct 12, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-11_17_45_12-3290421013320524464
    Oct 12, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-11_17_45_12-3290421013320524464
    Oct 12, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-12T00:45:15.981Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 12, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:22.749Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 12, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:23.691Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 12, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:23.735Z: Expanding GroupByKey operations into optimizable parts.
    Oct 12, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:23.755Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 12, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:23.818Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 12, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:23.849Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 12, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:23.882Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 12, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:24.265Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:24.347Z: Starting 5 workers in us-central1-c...
    Oct 12, 2021 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:34.481Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 12, 2021 12:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:54.613Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Oct 12, 2021 12:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:54.641Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Oct 12, 2021 12:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:46:04.919Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 12, 2021 12:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:46:28.753Z: Workers have started successfully.
    Oct 12, 2021 12:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:46:28.779Z: Workers have started successfully.
    Oct 12, 2021 12:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:46:57.628Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 12:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:46:57.777Z: Cleaning up.
    Oct 12, 2021 12:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:46:57.853Z: Stopping worker pool...
    Oct 12, 2021 12:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:49:20.531Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 12, 2021 12:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:49:20.571Z: Worker pool stopped.
    Oct 12, 2021 12:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-11_17_45_12-3290421013320524464 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d1ffe6f6-3f14-474e-aab9-210557f33f07 and timestamp: 2021-10-12T00:49:26.446000000Z:
                     Metric:                    Value:
                   read_time                     8.046
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 12:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 33.115 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/dx2eeycq4b3tm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2533

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2533/display/redirect?page=changes>

Changes:

[rogelio.hernandez] [BEAM-12371][BEAM-12373] Fixed page navigation


------------------------------------------
[...truncated 338.42 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 11, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 11, 2021 6:44:53 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 11, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 11, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
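
    [Editor's note: unlike the two failing tests, this plan has no BeamCalcRel
    above the source -- the projection and filter were folded into
    BeamPushDownIOSourceRel, so only the four used fields are read and the
    predicate above is evaluated by the BigQuery Storage Read API rather than
    inside Beam. A hedged sketch of the query under test (table registration
    and pipeline wiring are elided and hypothetical):

        import org.apache.beam.sdk.extensions.sql.SqlTransform;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        PCollection<Row> filtered = pipeline.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM HACKER_NEWS "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

    The same SQL appears verbatim in the planner output above.]
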
    Oct 11, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 11, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 11, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 11, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4286376879441808855.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-LZNM0OjPKJ8YxiJnDJthhP5e5RkAmgFi1XNpGVXx5GQ.jar
    Oct 11, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 11, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 11, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash d8c0b5f14e368b35fc370d35c4017b16d55a452ebed7138b264fbac650535a02> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-2MC18U42izX8Nw01xAF7FtVaRS6-1xOLJk-6xlBTWgI.pb
    Oct 11, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 11, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 11, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 11, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 11, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-11_11_45_05-18093358603790928327?project=apache-beam-testing
    Oct 11, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-11_11_45_05-18093358603790928327
    Oct 11, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-11_11_45_05-18093358603790928327
    Oct 11, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-11T18:45:08.828Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:16.276Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:16.904Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:16.924Z: Expanding GroupByKey operations into optimizable parts.
    Oct 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:16.949Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:17.007Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:17.026Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:17.051Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 11, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:17.359Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:17.413Z: Starting 5 workers in us-central1-a...
    Oct 11, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:43.092Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 11, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:44.397Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 11, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:44.426Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 11, 2021 6:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:54.708Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 11, 2021 6:46:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:46:19.615Z: Workers have started successfully.
    Oct 11, 2021 6:46:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:46:19.641Z: Workers have started successfully.
    Oct 11, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:46:48.705Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:46:48.834Z: Cleaning up.
    Oct 11, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:46:48.903Z: Stopping worker pool...
    Oct 11, 2021 6:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:49:11.811Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 11, 2021 6:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:49:11.847Z: Worker pool stopped.
    Oct 11, 2021 6:49:17 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-11_11_45_05-18093358603790928327 finished with status DONE.


Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2bcaf07e-50cc-4f88-b2ad-67a308acc5cd and timestamp: 2021-10-11T18:49:17.896000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.985

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 6:49:18 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
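
    [Editor's note: this warning affects only metrics export, not the build
    verdict. Publishing requires the InfluxDB measurement/database settings;
    the option names below are assumed from the Beam test utilities and are
    not confirmed by this log:

        -DbeamTestPipelineOptions='[..., "--influxDatabase=beam_test_metrics",
            "--influxMeasurement=sql_bqio_read_java_batch",
            "--influxHost=http://localhost:8086"]'

    With those set, InfluxDBPublisher would attempt the write instead of
    skipping it.]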

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 28.983 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 58s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/76oq36igpdb5o

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2532

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2532/display/redirect>

Changes:


------------------------------------------
[...truncated 337.79 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
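
[Editor's note: the long command line above is the test JVM that Gradle
forked. A hedged sketch of reproducing this run locally (flags abbreviated
from the log; the --tests filter is an assumption, not part of this build):

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest \
        --tests "org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT" \
        -DbeamTestPipelineOptions='["--project=apache-beam-testing","--runner=DataflowRunner","--region=us-central1"]'

Actual Jenkins runs pass the full option list shown above.]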

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
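
    [Editor's note: the multiple-bindings warning is benign here, but the
    usual remedy is to keep a single SLF4J binding on the test classpath. A
    sketch in Gradle (file and configuration names are assumptions, not taken
    from this log):

        // build.gradle -- drop the JDK14 binding; the worker jar's binding remains
        configurations.testRuntimeClasspath {
            exclude group: 'org.slf4j', module: 'slf4j-jdk14'
        }

    SLF4J then resolves exactly one StaticLoggerBinder.]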

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 11, 2021 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 11, 2021 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 11, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 11, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 11, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 11, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 11, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 11, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 11, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test778636717883802267.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-lKsOXenpkv6QMTNLvkLm8r-EXGjI6lC38EmWyvOVDn8.jar
    Oct 11, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 11, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 11, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104888 bytes, hash 7c6d3cf99c79f389dc609e13d48842d3075d221f2d42ccf0f31b76455950b2e5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-fG08-Zx584ncYJ4T1IhC0wddIh8tQszw8xt2RVlQsuU.pb
    Oct 11, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 11, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 11, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 11, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 11, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-11_05_45_04-1710193203739782743?project=apache-beam-testing
    Oct 11, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-11_05_45_04-1710193203739782743
    Oct 11, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-11_05_45_04-1710193203739782743
    Oct 11, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-11T12:45:07.518Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 11, 2021 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:12.466Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.187Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.225Z: Expanding GroupByKey operations into optimizable parts.
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.260Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.322Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.361Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.391Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.748Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.817Z: Starting 5 workers in us-central1-a...
    Oct 11, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:20.881Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 11, 2021 12:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:53.869Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 11, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:46:22.354Z: Workers have started successfully.
    Oct 11, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:46:22.389Z: Workers have started successfully.
    Oct 11, 2021 12:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:46:51.120Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 12:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:46:51.253Z: Cleaning up.
    Oct 11, 2021 12:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:46:51.321Z: Stopping worker pool...
    Oct 11, 2021 12:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:49:25.496Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 11, 2021 12:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:49:25.540Z: Worker pool stopped.
    Oct 11, 2021 12:49:30 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-11_05_45_04-1710193203739782743 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a8ebbd03-21ad-43af-b43f-7aa5e9bf972a and timestamp: 2021-10-11T12:49:30.729000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.623

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 12:49:31 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 43.437 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/msnw2f5fkfaz6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2531

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2531/display/redirect>

Changes:


------------------------------------------
[...truncated 337.93 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 11, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 11, 2021 6:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 11, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 11, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 11, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 11, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 11, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 11, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1003844912941934074.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-dd-ZacZ0zsfWNkqOsDmw4e1TJ2wG79wAi6KgZu2aVD8.jar
    Oct 11, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 11, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 11, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash c1ac01cffce0988c39de57b60bc40d02a652a96211e13281de9e170668b7426e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-wawBz_zgmIw53le2C8QNAqZSqWIR4TKB3p4XBmi3Qm4.pb
    Oct 11, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 11, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 11, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 11, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 11, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-10_23_45_09-8607510149780975094?project=apache-beam-testing
    Oct 11, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-10_23_45_09-8607510149780975094
    Oct 11, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-10_23_45_09-8607510149780975094
    Oct 11, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-11T06:45:13.115Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:18.609Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.161Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.206Z: Expanding GroupByKey operations into optimizable parts.
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.237Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.293Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.318Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.349Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.674Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.735Z: Starting 5 workers in us-central1-a...
    Oct 11, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:40.128Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 11, 2021 6:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:46:00.053Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 11, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:46:25.928Z: Workers have started successfully.
    Oct 11, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:46:25.951Z: Workers have started successfully.
    Oct 11, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:46:53.973Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:46:54.085Z: Cleaning up.
    Oct 11, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:46:54.140Z: Stopping worker pool...
    Oct 11, 2021 6:49:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:49:15.456Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 11, 2021 6:49:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:49:15.499Z: Worker pool stopped.
    Oct 11, 2021 6:49:21 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-10_23_45_09-8607510149780975094 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 42eded31-3dcc-4550-b08b-ba58c9fd83c4 and timestamp: 2021-10-11T06:49:21.759000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.357

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 6:49:22 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
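
    (The warning above is benign for the test verdict: it only means this run's metrics were not exported to InfluxDB because no measurement/database was configured. A minimal sketch of configuring them, assuming Beam's InfluxDBSettings builder API in org.apache.beam.sdk.testutils.publishing; all values below are placeholders, not taken from this job:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // Placeholder host/database/measurement; a real run would read these
        // from its pipeline options rather than hard-coding them.
        InfluxDBSettings settings = InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_test_metrics")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
      }
    }
    )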

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 30.191 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 2s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/pf46l4npmi4u6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2530

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2530/display/redirect>

Changes:


------------------------------------------
[...truncated 339.46 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 11, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 11, 2021 12:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 11, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 11, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
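
    (The IllegalStateException above is a coder-inference failure: the RowMonitor output is a PCollection of Beam Row, and no coder can be inferred for Row without a schema. A minimal sketch of the remedy the message itself suggests, with field names and types assumed from the projected columns rather than taken from the test code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.create());

        // Assumed schema for the four projected columns.
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT32)
            .build();

        // Attaching the schema gives the collection a SchemaCoder, so the
        // default-Coder lookup that failed above never happens.
        PCollection<Row> rows = pipeline.apply(
            Create.of(Row.withSchema(schema).addValues("alice", "story", "hi", 3).build())
                .withRowSchema(schema));

        // For a PCollection<Row> produced elsewhere, the equivalent fix is:
        //   someRows.setRowSchema(schema);

        pipeline.run().waitUntilFinish();
      }
    }
    )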

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])
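
    (For context, the BEAMPlan above is what Beam SQL produces when both the projection, shown as usedFields, and the filter push down into the source. A minimal, self-contained sketch of running the same shape of query through SqlTransform, where a single input PCollection is exposed to the query as the table PCOLLECTION; the schema and sample row are assumptions, not the test's data:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SqlQuerySketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.create());

        Schema schema = Schema.builder()
            .addNullableField("by", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT32)
            .build();

        PCollection<Row> hackerNews = pipeline.apply(
            Create.of(Row.withSchema(schema).addValues("alice", "story", "hi", 3).build())
                .withRowSchema(schema));

        // `by` is a keyword in the Calcite dialect, hence the backticks,
        // matching the quoting in the logged SQL.
        PCollection<Row> authors = hackerNews.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }
    )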

    Oct 11, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 11, 2021 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 11, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 11, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 11, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4218387626765777387.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-OqCx62PIOcIQyNk2n9dSMNWhhAjS40_CeL6Cs7lQYYU.jar
    Oct 11, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.14.1/85348a0c44c298bbec5ae747e67ae12e60b3aef6/libthrift-0.14.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/libthrift-0.14.1-WzUQ_nLm8HJeKc7269seq6zMxp15_E7VC2gWAKh2Z-w.jar
    Oct 11, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat/tomcat-annotations-api/8.5.46/56c67699de192c603afd6f029e80e5ff8d98e7e9/tomcat-annotations-api-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-annotations-api-8.5.46-amtG0OaVhkRRTAyjZYs7B-YSOmgqIO4203lSQnNfq8M.jar
    Oct 11, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/8.5.46/5d686394334d143f48251827435ab086a161e75e/tomcat-embed-core-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-embed-core-8.5.46-vl-FREjS7l1uADb-srT3ExYweaG2uaepdQjlWRetNcI.jar
    Oct 11, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 246 files cached, 4 files newly uploaded in 0 seconds
    Oct 11, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 11, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 95263ea5b7317c83ac9660b3d2b6f1c92409ff428a2713ed9f3c00c0ea702db7> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-lSY-pbcxfIOslmCz0rbxySQJ_0KKJxPtnzwAwOpwLbc.pb
    Oct 11, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 11, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 11, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 11, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 11, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-10_17_45_04-6294139650922413111?project=apache-beam-testing
    Oct 11, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-10_17_45_04-6294139650922413111
    Oct 11, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-10_17_45_04-6294139650922413111
    Oct 11, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-11T00:45:07.738Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 11, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:13.816Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 11, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:14.579Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 11, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:14.619Z: Expanding GroupByKey operations into optimizable parts.
    Oct 11, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:14.655Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 11, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:14.732Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 11, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:14.752Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 11, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:14.776Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 11, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:15.128Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:15.194Z: Starting 5 workers in us-central1-c...
    Oct 11, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:40.712Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 11, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:45.036Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 11, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:45.055Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 11, 2021 12:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:55.342Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 11, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:46:20.260Z: Workers have started successfully.
    Oct 11, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:46:20.294Z: Workers have started successfully.
    Oct 11, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:46:51.865Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:46:52.022Z: Cleaning up.
    Oct 11, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:46:52.096Z: Stopping worker pool...
    Oct 11, 2021 12:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:49:14.300Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 11, 2021 12:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:49:14.495Z: Worker pool stopped.
    Oct 11, 2021 12:49:21 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-10_17_45_04-6294139650922413111 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 05c48868-95eb-4f51-98cd-81588487acc7 and timestamp: 2021-10-11T00:49:21.537000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.828

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 12:49:21 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 33.878 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 3s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ifyxiyzfltgzy

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2529

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2529/display/redirect>

Changes:


------------------------------------------
[...truncated 339.05 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 10, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 10, 2021 6:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 10, 2021 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 10, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
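
    (The pushed-down filter and field list above are served by the BigQuery Storage Read API, i.e. the DIRECT_READ method. Outside of Beam SQL, a roughly equivalent direct read with BigQueryIO might look like the following sketch; the table reference is a placeholder, while the selected fields and row restriction mirror the log:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            BigQueryIO.readTableRows()
                .from("some-project:beam.HACKER_NEWS") // placeholder table reference
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Projection and filter handed to the Storage Read API,
                // as in the push-down plan above.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
    )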
    Oct 10, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 10, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 10, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 10, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3532494700713359605.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ZPgtTCDcy_jy0olujt5ro-El35vukIyHbo8kpFGe-wE.jar
    Oct 10, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 10, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 10, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 7e63006485f5391bb51d6d13a7b3c1593fe22df7a537f7e61d6c7808099e03d3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-fmMAZIX1ORu1HW0Tp7PBWT_iLfelN_fmHWx4CAmeA9M.pb
    Oct 10, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 10, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 10, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 10, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 10, 2021 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-10_11_45_09-12472628253534294307?project=apache-beam-testing
    Oct 10, 2021 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-10_11_45_09-12472628253534294307
    Oct 10, 2021 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-10_11_45_09-12472628253534294307
    Oct 10, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-10T18:45:12.793Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:19.110Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:19.772Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:19.806Z: Expanding GroupByKey operations into optimizable parts.
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:19.831Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:19.914Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:19.945Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:19.978Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:20.299Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:20.365Z: Starting 5 workers in us-central1-a...
    Oct 10, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:29.810Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 10, 2021 6:45:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:55.494Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 10, 2021 6:45:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:55.530Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 10, 2021 6:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:46:05.793Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 10, 2021 6:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:46:29.308Z: Workers have started successfully.
    Oct 10, 2021 6:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:46:29.331Z: Workers have started successfully.
    Oct 10, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:46:57.900Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:46:58.034Z: Cleaning up.
    Oct 10, 2021 6:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:46:58.137Z: Stopping worker pool...
    Oct 10, 2021 6:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:49:18.101Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 10, 2021 6:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:49:18.130Z: Worker pool stopped.
    Oct 10, 2021 6:49:26 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-10_11_45_09-12472628253534294307 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5d20fcb0-e9ac-468b-b2b2-cf607b57aeb4 and timestamp: 2021-10-10T18:49:26.057000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.163

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 6:49:26 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 35.464 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/vz7tgyguu7d46

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2528

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2528/display/redirect>

Changes:


------------------------------------------
[...truncated 338.49 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 10, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 10, 2021 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 10, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 10, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
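
The failure above is the coder-inference problem the exception spells out: the RowMonitor ParDo emits Beam Rows, and a PCollection<Row> has no default coder unless a schema is attached. A minimal sketch of the remedy the message itself suggests (setRowSchema), using an illustrative pass-through DoFn rather than the IT's actual RowMonitor:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFix {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        Schema schema = Schema.builder()
            .addStringField("author")
            .addInt32Field("score")
            .build();

        PCollection<Row> rows =
            p.apply(Create.of(Row.withSchema(schema).addValues("a", 3).build())
                    .withRowSchema(schema))
             .apply(ParDo.of(new DoFn<Row, Row>() {
               @ProcessElement
               public void process(@Element Row row, OutputReceiver<Row> out) {
                 out.output(row); // pass-through, standing in for ParDo(RowMonitor)
               }
             }))
             // Without this line, coder inference fails exactly as logged above:
             // a Row PCollection needs a schema before the pipeline is finalized.
             .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }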

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 10, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 10, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 10, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 10, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 10, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1895807884124141554.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ERQ9yF0p56hwD1RWzYwEiAM_g4s2mISho-61HNAs5KU.jar
    Oct 10, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 10, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 10, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash de07d5a1f7805377cb4931e4574f19ae2112c9837fe6f3413354dda332cb2c30> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-3gfVofeAU3fLSTHkV08ZriESyYN_5vNBM1TdozLLLDA.pb
    Oct 10, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 10, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 10, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 10, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 10, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-10_05_45_09-2904342065627502034?project=apache-beam-testing
    Oct 10, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-10_05_45_09-2904342065627502034
    Oct 10, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-10_05_45_09-2904342065627502034
    Oct 10, 2021 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-10T12:45:12.297Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 10, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:18.367Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 10, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.055Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 10, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.105Z: Expanding GroupByKey operations into optimizable parts.
    Oct 10, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.126Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 10, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.186Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 10, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.210Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 10, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.245Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 10, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.537Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.616Z: Starting 5 workers in us-central1-a...
    Oct 10, 2021 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:35.352Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 10, 2021 12:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:46:04.443Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 10, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:46:31.118Z: Workers have started successfully.
    Oct 10, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:46:31.163Z: Workers have started successfully.
    Oct 10, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:46:59.048Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:46:59.206Z: Cleaning up.
    Oct 10, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:46:59.280Z: Stopping worker pool...
    Oct 10, 2021 12:49:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:49:27.738Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 10, 2021 12:49:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:49:27.783Z: Worker pool stopped.
    Oct 10, 2021 12:49:32 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-10_05_45_09-2904342065627502034 finished with status DONE.
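
The push-down variant above is the one that succeeds, and it is also the efficient path: with method DIRECT_READ the planner hands the used fields ([by, type, title, score]) and the supported filter to the BigQuery Storage Read API instead of scanning all fourteen HACKER_NEWS columns. A sketch of wiring this up outside the test harness, assuming SqlTransform#withDdlString as in recent SDKs, the documented bigquery table-provider DDL syntax, and an illustrative table location and schema:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // DIRECT_READ enables projection and predicate push-down via the
        // BigQuery Storage Read API; the DEFAULT (export) method cannot.
        String ddl =
            "CREATE EXTERNAL TABLE HACKER_NEWS (`by` VARCHAR, type VARCHAR, "
                + "title VARCHAR, score INT) "
                + "TYPE bigquery "
                + "LOCATION 'apache-beam-testing:beam.HACKER_NEWS' "
                + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";

        // Querying only external tables: apply SqlTransform to an empty tuple.
        PCollection<Row> result =
            PCollectionTuple.empty(p)
                .apply(SqlTransform.query(
                        "SELECT `by` AS author, type, title, score "
                            + "FROM HACKER_NEWS "
                            + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                    .withDdlString(ddl));

        p.run().waitUntilFinish();
      }
    }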

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1c35b425-5ff9-4559-9a91-7aac8090385b and timestamp: 2021-10-10T12:49:33.012000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.716
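
The fields_read and read_time values above come from Beam metrics recorded by the monitor DoFns and queried from the finished job. A minimal sketch of pulling such a counter out of a PipelineResult with the standard metrics API; the namespace string here is an assumption for illustration, not the IT's actual one:

    import org.apache.beam.sdk.PipelineResult;
    import org.apache.beam.sdk.metrics.MetricNameFilter;
    import org.apache.beam.sdk.metrics.MetricQueryResults;
    import org.apache.beam.sdk.metrics.MetricResult;
    import org.apache.beam.sdk.metrics.MetricsFilter;

    public class ReadCounters {
      // Query a named counter (e.g. one created via Metrics.counter(...))
      // from a completed pipeline run.
      static void printFieldsRead(PipelineResult result) {
        MetricQueryResults metrics = result.metrics().queryMetrics(
            MetricsFilter.builder()
                .addNameFilter(MetricNameFilter.named("sql_bqio", "fields_read"))
                .build());
        for (MetricResult<Long> counter : metrics.getCounters()) {
          System.out.println(counter.getName() + " = " + counter.getAttempted());
        }
      }
    }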

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 12:49:33 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 41.982 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 15s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/b2cpjsgi5dsco

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2527

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2527/display/redirect>

Changes:


------------------------------------------
[...truncated 339.74 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 10, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 10, 2021 6:45:01 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 10, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 10, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 10, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 10, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 10, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 10, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2451411109503935012.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-NNFK-BsKFY9axwHRZvvrKPCF9gDEYqqJ7gBHAF8oDZI.jar
    Oct 10, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 10, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 10, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 942a12ffd5f7c57c67d90c09b1d581b1ff4231356abc11a0f96b5317f2605350> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-lCoS_9X3xXxn2QwJsdWBsf9CMTVqvBGg-WtTF_JgU1A.pb
    Oct 10, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 10, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 10, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 10, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 10, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-09_23_45_16-9184379378238017825?project=apache-beam-testing
    Oct 10, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-09_23_45_16-9184379378238017825
    Oct 10, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-09_23_45_16-9184379378238017825
    Oct 10, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-10T06:45:19.351Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 10, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:25.639Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 10, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.316Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 10, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.355Z: Expanding GroupByKey operations into optimizable parts.
    Oct 10, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.392Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 10, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.472Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 10, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.501Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 10, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.548Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 10, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.893Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.965Z: Starting 5 workers in us-central1-c...
    Oct 10, 2021 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:47.868Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 10, 2021 6:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:46:07.715Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 10, 2021 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:46:32.477Z: Workers have started successfully.
    Oct 10, 2021 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:46:32.506Z: Workers have started successfully.
    Oct 10, 2021 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:47:01.606Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:47:01.764Z: Cleaning up.
    Oct 10, 2021 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:47:01.867Z: Stopping worker pool...
    Oct 10, 2021 6:49:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:49:16.626Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 10, 2021 6:49:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:49:16.661Z: Worker pool stopped.
    Oct 10, 2021 6:49:23 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-09_23_45_16-9184379378238017825 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 62fb6eee-6aeb-4ca6-9bd7-cac4c37200f7 and timestamp: 2021-10-10T06:49:23.372000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.419

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 6:49:23 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 26.772 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/6fm4dycguo2yo

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2526

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2526/display/redirect>

Changes:


------------------------------------------
[...truncated 339.26 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 10, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 10, 2021 12:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 10, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 10, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 10, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 10, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 10, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 10, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 10, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4350555590823145877.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-S8hEUruBVC4l4WRy8xgM4mh9koDJZiIwn_wjiJQHuME.jar
    Oct 10, 2021 12:45:13 AM org.apache.beam.sdk.metrics.MetricsEnvironment getCurrentContainer
    WARNING: Reporting metrics are not supported in the current execution environment.
    Oct 10, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 11 seconds
    Oct 10, 2021 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 10, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104895 bytes, hash da7538f12b6d1cfe170b19e872238c117d7a99f2f9196b42384201cd72348641> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-2nU48SttHP4XCxnociOMEX16mfL5GWtCOEIBzXI0hkE.pb
    Oct 10, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 10, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 10, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 10, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 10, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-09_17_45_17-11111101773425098417?project=apache-beam-testing
    Oct 10, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-09_17_45_17-11111101773425098417
    Oct 10, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-09_17_45_17-11111101773425098417
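
The handle returned by run() also offers a programmatic alternative to the gcloud command above; a minimal sketch, with pipeline options wiring omitted and assumed to select the DataflowRunner:

    import java.io.IOException;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.PipelineResult;

    public class CancelSketch {
      public static void main(String[] args) throws IOException {
        // Assumes options selecting DataflowRunner, as on the test command line.
        Pipeline p = Pipeline.create();
        PipelineResult result = p.run();
        // Programmatic equivalent of the gcloud command above: ask the
        // service to cancel the submitted job.
        result.cancel();
      }
    }
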
    Oct 10, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-10T00:45:20.252Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 10, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:26.819Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 10, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:27.560Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 10, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:27.602Z: Expanding GroupByKey operations into optimizable parts.
    Oct 10, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:27.630Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 10, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:27.704Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 10, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:27.731Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 10, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:27.761Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 10, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:28.077Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:28.141Z: Starting 5 workers in us-central1-c...
    Oct 10, 2021 12:45:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:50.024Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 10, 2021 12:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:57.936Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 10, 2021 12:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:57.967Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 10, 2021 12:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:46:08.231Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 10, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:46:35.013Z: Workers have started successfully.
    Oct 10, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:46:35.046Z: Workers have started successfully.
    Oct 10, 2021 12:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:47:09.896Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 12:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:47:10.047Z: Cleaning up.
    Oct 10, 2021 12:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:47:10.121Z: Stopping worker pool...
    Oct 10, 2021 12:49:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:49:27.359Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 10, 2021 12:49:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:49:27.394Z: Worker pool stopped.
    Oct 10, 2021 12:49:32 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-09_17_45_17-11111101773425098417 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8997a3ba-c2a6-4ea2-b65e-28fdd0c0a3a6 and timestamp: 2021-10-10T00:49:32.700000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                       7.5

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 12:49:33 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 44.061 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/njgwonvoougw2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2525

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2525/display/redirect>

Changes:


------------------------------------------
[...truncated 337.82 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 09, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 09, 2021 6:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 09, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 09, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
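
The two failing tests fail here, before any job is submitted: the output PCollection carries Beam Rows but no schema, so no coder can be inferred. A minimal stand-alone sketch of the remedy the message itself names, PCollection.setRowSchema (toy schema and transform, not the IT's code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.transforms.SimpleFunction;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFix {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Toy schema covering the columns the test query projects.
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")
            .build();
        PCollection<Row> rows = p
            .apply(Create.of(Row.withSchema(schema)
                    .addValues("alice", "story", "hello world", 3)
                    .build())
                .withRowSchema(schema))
            // A Row-to-Row transform like this loses the schema: without help,
            // getCoder() on its output fails exactly as in the trace above.
            .apply(MapElements.via(new SimpleFunction<Row, Row>() {
              @Override
              public Row apply(Row r) {
                return r;
              }
            }))
            // The fix the error message points to: attach the schema so the
            // PCollection gets a SchemaCoder and getCoder() no longer throws.
            .setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }
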

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 09, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 09, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 09, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 09, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8695599148799655228.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-PYDgNAkrANxWBl5Rjor_KQiFweeelrGmrKf8bNXKtHs.jar
    Oct 09, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 09, 2021 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 09, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 07414eb65127ed6ea6c1e88cc3a245556e87e5a23b602e84cc4966d4a0a862d6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-B0FOtlEn7W6mweiMw6JFVW6H5aI7YC6EzElm1KCoYtY.pb
    Oct 09, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 09, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 09, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 09, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 09, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-09_11_45_13-17975078393602797598?project=apache-beam-testing
    Oct 09, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-09_11_45_13-17975078393602797598
    Oct 09, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-09_11_45_13-17975078393602797598
    Oct 09, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-09T18:45:16.196Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 09, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:24.697Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:25.541Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:25.575Z: Expanding GroupByKey operations into optimizable parts.
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:25.622Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:25.695Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:25.724Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:25.758Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:26.094Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:26.171Z: Starting 5 workers in us-central1-a...
    Oct 09, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:38.467Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 09, 2021 6:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:46:05.933Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 09, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:46:31.652Z: Workers have started successfully.
    Oct 09, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:46:31.678Z: Workers have started successfully.
    Oct 09, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:47:01.011Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:47:01.165Z: Cleaning up.
    Oct 09, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:47:01.228Z: Stopping worker pool...
    Oct 09, 2021 6:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:49:20.907Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 09, 2021 6:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:49:20.943Z: Worker pool stopped.
    Oct 09, 2021 6:49:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-09_11_45_13-17975078393602797598 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): dbde7288-f025-433b-8031-b2f85252ab44 and timestamp: 2021-10-09T18:49:27.241000000Z:
                     Metric:                    Value:
                   read_time                      9.48
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 6:49:27 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 37.427 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/7o72a24nidyy2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2524

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2524/display/redirect>

Changes:


------------------------------------------
[...truncated 337.85 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 09, 2021 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 09, 2021 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 09, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 09, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 09, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 09, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 09, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 09, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6429331046276212595.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-GJg_cz8ZP5cdiPRAulNlDxPvOwcgfzKERqgxI5q_XX4.jar
    Oct 09, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Oct 09, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 09, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104888 bytes, hash e452720a9550ad2bd29528c0b3f26c9c04a35c0ebea53c2238e6dcd7940cb26f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-5FJyCpVQrSvSlSjAs_JsnASjXA6-pTwiOObc15QMsm8.pb
    Oct 09, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 09, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 09, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 09, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 09, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-09_05_45_05-17578209071882798612?project=apache-beam-testing
    Oct 09, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-09_05_45_05-17578209071882798612
    Oct 09, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-09_05_45_05-17578209071882798612
    Oct 09, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-09T12:45:08.767Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 09, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:14.684Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 09, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:15.367Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 09, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:15.414Z: Expanding GroupByKey operations into optimizable parts.
    Oct 09, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:15.454Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 09, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:15.696Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 09, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:15.758Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 09, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:15.821Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 09, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:16.209Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:16.279Z: Starting 5 workers in us-central1-c...
    Oct 09, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:24.743Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 09, 2021 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:55.109Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 09, 2021 12:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:46:20.437Z: Workers have started successfully.
    Oct 09, 2021 12:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:46:20.482Z: Workers have started successfully.
    Oct 09, 2021 12:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:46:50.055Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 12:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:46:50.204Z: Cleaning up.
    Oct 09, 2021 12:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:46:50.287Z: Stopping worker pool...
    Oct 09, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:49:16.051Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 09, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:49:16.093Z: Worker pool stopped.
    Oct 09, 2021 12:49:22 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-09_05_45_05-17578209071882798612 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4ec63783-fd4a-4c71-821e-ded71f0031c4 and timestamp: 2021-10-09T12:49:22.247000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.405

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 12:49:22 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 34.782 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 4s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/6me5cuxycx4v2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2523

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2523/display/redirect>

Changes:


------------------------------------------
[...truncated 337.99 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 09, 2021 6:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 09, 2021 6:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 09, 2021 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 09, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
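
Side note on the failure above: the three listed root causes reduce to one gap. ParDo(RowMonitor) outputs a PCollection<Row>, and Row has no inferable Coder, so the schema has to be reattached after the ParDo. Below is a minimal sketch of the remedy the message itself names (PCollection.setRowSchema), with a placeholder schema standing in for the HACKER_NEWS row type; the names are illustrative, not the test's actual code.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        // A runner (e.g. DirectRunner) is assumed on the classpath.
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Placeholder schema mirroring the projected columns in the query above.
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        Row row = Row.withSchema(schema).addValues("a", "story", "t", 3L).build();
        PCollection<Row> rows = p.apply(Create.of(row).withRowSchema(schema));

        PCollection<Row> monitored =
            rows.apply(ParDo.of(new DoFn<Row, Row>() {
                  @ProcessElement
                  public void processElement(@Element Row r, OutputReceiver<Row> out) {
                    out.output(r); // pass-through, like the test's RowMonitor
                  }
                }))
                // ParDo output drops the input schema; without this line the
                // pipeline fails exactly as in the stack trace above.
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }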

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 09, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
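
This BEAMPlan is what separates the passing push-down test from the two failures above: the projection (usedFields) becomes the Storage Read API column list and the supported filter becomes its row restriction. At the IO level the same read can be sketched directly with BigQueryIO's Storage Read API options; the table below is the public Hacker News dataset as a stand-in, not necessarily the table this job reads.

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // stand-in table
                // DIRECT_READ = BigQuery Storage Read API, the prerequisite
                // for server-side projection and filtering.
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Equivalent of usedFields=[[by, type, title, score]] above.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Equivalent of the pushed-down filter logged above.
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
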
    Oct 09, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 09, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 09, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 09, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7434095934576527111.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6R1Ctu7wsxDoH_2sFKHxePjiL4CP_7Dd6l4Qkl6r7Tk.jar
    Oct 09, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 09, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 09, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash f04c910f7290ecba36f460c1f4c0d907204147f4c663c395dcb183a9d4b99be1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8EyRD3KQ7Lo29GDB9MDZByBBR_TGY8OV3LGDqdS5m-E.pb
    Oct 09, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 09, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 09, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 09, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 09, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-08_23_45_03-16159411720345908142?project=apache-beam-testing
    Oct 09, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-08_23_45_03-16159411720345908142
    Oct 09, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-08_23_45_03-16159411720345908142
    Oct 09, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-09T06:45:08.660Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 09, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:16.191Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.123Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.194Z: Expanding GroupByKey operations into optimizable parts.
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.251Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.337Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.372Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.406Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.751Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.820Z: Starting 5 workers in us-central1-c...
    Oct 09, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:39.118Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 09, 2021 6:45:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:49.011Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 09, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:46:22.497Z: Workers have started successfully.
    Oct 09, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:46:22.527Z: Workers have started successfully.
    Oct 09, 2021 6:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:46:52.233Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 6:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:46:52.372Z: Cleaning up.
    Oct 09, 2021 6:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:46:52.432Z: Stopping worker pool...
    Oct 09, 2021 6:49:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:49:15.630Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 09, 2021 6:49:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:49:15.679Z: Worker pool stopped.
    Oct 09, 2021 6:49:22 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-08_23_45_03-16159411720345908142 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f3b58762-f298-45f0-b85c-7b874a147d1d and timestamp: 2021-10-09T06:49:23.030000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.023

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 6:49:23 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
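
This warning explains why the read_time/fields_read metrics above only go to BigQuery: the InfluxDB publisher was handed no database or measurement, so it skips publishing. If publishing were wanted, the test utilities take an InfluxDBSettings; a rough sketch assuming the builder API in org.apache.beam.sdk.testutils.publishing, with every value a placeholder:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // All values are placeholders; when database and measurement are
        // absent the publisher logs the "Missing property" warning above
        // and skips publishing.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")
                .withDatabase("beam_test_metrics")
                .withMeasurement("sql_bqio_read_java_batch")
                .get();
      }
    }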

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 36.25 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 6s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/p42jgv4ly4nse

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2522

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2522/display/redirect?page=changes>

Changes:

[Brian Hulette] Add DataFrame changes to CHANGES.md

[stranniknm] [BEAM-12964]: add code editor component

[stranniknm] [BEAM-12966]: load initial example on page load

[stranniknm] [BEAM-12964]: fix editor scrolling

[stranniknm] [BEAM-12964]: add dark theme support

[stranniknm] [BEAM-12964]: add licence to example repository file

[stranniknm] [BEAM-12964]: extract playground page providers

[stranniknm] [BEAM-12964]: refactor styles

[Brian Hulette] Don't use run_pytest.sh for pyarrow tests

[Brian Hulette] Fail in run_pytest.sh if -m is specified

[noreply] [BEAM-10913] - Forcing update of YAML file by running kubectl apply

[noreply] [BEAM-10955] Flink Java Runner test flake: Could not find Flink job

[noreply] [BEAM-10114] Bump Pub/Sub Lite version (#15640)


------------------------------------------
[...truncated 368.68 KB...]
Gradle Test Executor 7 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 7'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 7'
Successfully started process 'Gradle Test Executor 7'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 09, 2021 12:48:08 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 09, 2021 12:48:09 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 09, 2021 12:48:09 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 09, 2021 12:48:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:48:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:48:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 12:48:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:48:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:48:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 12:48:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:48:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 12:48:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:48:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:48:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 12:48:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 09, 2021 12:48:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 09, 2021 12:48:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 09, 2021 12:48:22 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 09, 2021 12:48:22 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 09, 2021 12:48:22 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3372550691481844343.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-qO8nTAgZslIfmDxxT4hKCkbGiWJ6t-61EjzOHJwnthI.jar
    Oct 09, 2021 12:48:22 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 09, 2021 12:48:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 09, 2021 12:48:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 5fae060ff678a2dfac04ce6f5b4a175a7dc95e72d4dd6006f7d95f963510b6e8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-X64GD_Z4ot-sBM5vW0oXWn3JXnLU3WAG99lfljUQtug.pb
    Oct 09, 2021 12:48:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 09, 2021 12:48:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 09, 2021 12:48:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 09, 2021 12:48:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 09, 2021 12:48:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-08_17_48_25-12615639983825538711?project=apache-beam-testing
    Oct 09, 2021 12:48:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-08_17_48_25-12615639983825538711
    Oct 09, 2021 12:48:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-08_17_48_25-12615639983825538711
    Oct 09, 2021 12:48:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-09T00:48:30.707Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 09, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:38.374Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 09, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.232Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 09, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.274Z: Expanding GroupByKey operations into optimizable parts.
    Oct 09, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.300Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 09, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.364Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 09, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.390Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 09, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.429Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 09, 2021 12:48:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.776Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 12:48:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.856Z: Starting 5 workers in us-central1-c...
    Oct 09, 2021 12:48:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:57.796Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 09, 2021 12:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:49:15.072Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 09, 2021 12:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:49:15.093Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 09, 2021 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:49:25.327Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 09, 2021 12:49:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:49:49.706Z: Workers have started successfully.
    Oct 09, 2021 12:49:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:49:49.743Z: Workers have started successfully.
    Oct 09, 2021 12:50:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:50:18.312Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 12:50:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:50:18.468Z: Cleaning up.
    Oct 09, 2021 12:50:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:50:18.539Z: Stopping worker pool...
    Oct 09, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:52:36.896Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 09, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:52:36.940Z: Worker pool stopped.
    Oct 09, 2021 12:52:43 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-08_17_48_25-12615639983825538711 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f378ce38-0a90-43cf-a806-3f77f31bfec9 and timestamp: 2021-10-09T00:52:43.144000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.449

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 12:52:43 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 7 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 38.588 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 24s
152 actionable tasks: 116 executed, 36 from cache

Publishing build scan...
https://gradle.com/s/t6cg2rbizsfpo

Stopped 6 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2521

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2521/display/redirect?page=changes>

Changes:

[noreply] fixing test link

[noreply] fix linter issues


------------------------------------------
[...truncated 338.58 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is d68735da05c09ea3738895ffcf944360
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 08, 2021 6:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 08, 2021 6:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 08, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 08, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 08, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
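
The two INFO lines above show what the push-down buys: only the four used fields are requested, and the supported predicate is evaluated by BigQuery itself. Outside Beam SQL, roughly the same Storage Read API configuration can be written directly against BigQueryIO; a sketch, with a placeholder table reference (the test reads its own copy of the hacker_news data):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    class PushDownSketch {
      static TypedRead<TableRow> pushedDownRead() {
        return BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")  // placeholder table
            .withMethod(TypedRead.Method.DIRECT_READ)
            // Request only the columns the query uses...
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // ...and let BigQuery evaluate the supported filter server-side.
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
      }
    }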
    Oct 08, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 08, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 08, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 08, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5951849471958619263.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-LCJQNDdw31EDt8ENNwxdQqjaSYT-FEha-CXsV04IW-8.jar
    Oct 08, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 08, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 08, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash b26819f12f6c9f293a9986ef45abefc9296db922e2dff4cb55a9bed344fee486> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-smgZ8S9snyk6mYbvRavvySltuSLi3_TLVam-00T-5IY.pb
    Oct 08, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 08, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 08, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 08, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 08, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-08_11_45_05-6983261578214034220?project=apache-beam-testing
    Oct 08, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-08_11_45_05-6983261578214034220
    Oct 08, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-08_11_45_05-6983261578214034220
    Oct 08, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-08T18:45:08.856Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 08, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:16.868Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 08, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:17.755Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 08, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:17.809Z: Expanding GroupByKey operations into optimizable parts.
    Oct 08, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:17.847Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 08, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:17.908Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 08, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:17.939Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 08, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:17.979Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 08, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:18.381Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:18.524Z: Starting 5 workers in us-central1-c...
    Oct 08, 2021 6:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:46.884Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 08, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:47.445Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 08, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:47.467Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 08, 2021 6:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:57.714Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 08, 2021 6:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:46:20.651Z: Workers have started successfully.
    Oct 08, 2021 6:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:46:20.691Z: Workers have started successfully.
    Oct 08, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:46:49.095Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:46:49.320Z: Cleaning up.
    Oct 08, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:46:49.432Z: Stopping worker pool...
    Oct 08, 2021 6:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:49:12.188Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 08, 2021 6:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:49:12.229Z: Worker pool stopped.
    Oct 08, 2021 6:49:18 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-08_11_45_05-6983261578214034220 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6d170fa9-1d0f-473c-808b-035da0668d78 and timestamp: 2021-10-08T18:49:18.446000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.924

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 6:49:18 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 31.732 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 1s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ps52evv7hlsfo

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2520

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2520/display/redirect>

Changes:


------------------------------------------
[...truncated 337.76 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 031b61a29a86ea8d1e4fd08e92109785
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
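
For reference, the JSON array passed through -DbeamTestPipelineOptions in the command above is what the Beam test harness turns into PipelineOptions. A simplified, hypothetical sketch of that step (TestPipeline performs the real parsing internally, including the JSON decoding, and this assumes the same runner and GCP dependencies the test uses):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class OptionsParsingSketch {
      public static void main(String[] args) {
        // A few of the flags from the log, in the --key=value form the
        // harness produces after decoding the JSON array.
        String[] flags = {
            "--project=apache-beam-testing",
            "--runner=DataflowRunner",
            "--region=us-central1"
        };
        PipelineOptions options = PipelineOptionsFactory.fromArgs(flags).create();
        System.out.println(options);
      }
    }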

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 08, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 08, 2021 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 08, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 08, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 08, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 08, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 08, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 08, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 08, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4968577707565766474.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-1_Mp92kvAf_eZDCa7pLVZa6jjGClDzDfo2JxSQwLOnU.jar
    Oct 08, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 08, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 08, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash 69201f95e6afaacccc0b6e43ebbc28ce994557a7d3ad2d47bb65a3cf205c333f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-aSAfleavqszMC25D67wozplFV6fTrS1Hu2WjzyBcMz8.pb
    Oct 08, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 08, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 08, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 08, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 08, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-08_05_45_06-14664077179057365781?project=apache-beam-testing
    Oct 08, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-08_05_45_06-14664077179057365781
    Oct 08, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-08_05_45_06-14664077179057365781
    Oct 08, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-08T12:45:09.777Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 08, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:16.629Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:17.357Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:17.395Z: Expanding GroupByKey operations into optimizable parts.
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:17.423Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:17.496Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:17.527Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:17.562Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:17.937Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:18.023Z: Starting 5 workers in us-central1-c...
    Oct 08, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:22.802Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 08, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:51.865Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Oct 08, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:51.891Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Oct 08, 2021 12:46:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:46:02.157Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 08, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:46:22.767Z: Workers have started successfully.
    Oct 08, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:46:22.784Z: Workers have started successfully.
    Oct 08, 2021 12:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:46:53.002Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 12:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:46:53.154Z: Cleaning up.
    Oct 08, 2021 12:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:46:53.281Z: Stopping worker pool...
    Oct 08, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:49:21.185Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 08, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:49:21.226Z: Worker pool stopped.
    Oct 08, 2021 12:49:26 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-08_05_45_06-14664077179057365781 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 26cd6553-0743-4d4a-be5f-fec482014207 and timestamp: 2021-10-08T12:49:26.567000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    11.505

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 12:49:27 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 36.981 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/k367ievlu7mtm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2519

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2519/display/redirect?page=changes>

Changes:

[Udi Meiri] Release script fixes


------------------------------------------
[...truncated 338.95 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 031b61a29a86ea8d1e4fd08e92109785
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 08, 2021 6:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 08, 2021 6:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 08, 2021 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 08, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 08, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 08, 2021 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 08, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 08, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 08, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8849610813195269412.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-lppfSxFQkpTrMDhcxPnsubAVMFl9L_sgz_BkQEn3s8E.jar
    Oct 08, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 08, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 08, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 87f9be0378d066805f93f79b59088bffe9ae05a1edfda9b676dd70cbdabc3225> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-h_m-A3jQZoBfk_ebWQiL_-muBaHt_am2dt1wy9q8MiU.pb
    Oct 08, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 08, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 08, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 08, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 08, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-07_23_45_05-7467560351167534370?project=apache-beam-testing
    Oct 08, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-07_23_45_05-7467560351167534370
    Oct 08, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-07_23_45_05-7467560351167534370
    Oct 08, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-08T06:45:09.070Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:15.595Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.271Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.311Z: Expanding GroupByKey operations into optimizable parts.
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.340Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.390Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.414Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.445Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.801Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.897Z: Starting 5 workers in us-central1-c...
    Oct 08, 2021 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:23.817Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 08, 2021 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:52.476Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 08, 2021 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:52.526Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 08, 2021 6:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:46:02.823Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 08, 2021 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:46:27.877Z: Workers have started successfully.
    Oct 08, 2021 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:46:27.907Z: Workers have started successfully.
    Oct 08, 2021 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:46:56.830Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:46:56.973Z: Cleaning up.
    Oct 08, 2021 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:46:57.064Z: Stopping worker pool...
    Oct 08, 2021 6:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:49:13.333Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 08, 2021 6:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:49:13.388Z: Worker pool stopped.
    Oct 08, 2021 6:49:18 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-07_23_45_05-7467560351167534370 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c220cbbe-b40e-4fc0-944b-1e0464547291 and timestamp: 2021-10-08T06:49:18.898000000Z:
                     Metric:                    Value:
                   read_time                     8.396
                 fields_read                 4375276.0
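
    [editor's note] For reference, the push-down run above reads only the four
    projected columns and evaluates the filter inside the BigQuery Storage Read
    API, which is why fields_read stays at 4375276 while read_time is far lower
    than a full-table scan would need. A minimal sketch of an equivalent
    hand-written read, assuming the public BigQueryIO API (the table reference
    is illustrative, not the table this job used):

        import com.google.api.services.bigquery.model.TableRow;
        import java.util.Arrays;
        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
        import org.apache.beam.sdk.options.PipelineOptionsFactory;
        import org.apache.beam.sdk.values.PCollection;

        public class PushDownEquivalentRead {
          public static void main(String[] args) {
            Pipeline pipeline =
                Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

            // Read only the needed columns and push the predicate down to the
            // storage layer, mirroring what BeamPushDownIOSourceRel does above.
            PCollection<TableRow> rows =
                pipeline.apply(
                    BigQueryIO.readTableRows()
                        .from("bigquery-public-data:hacker_news.full") // illustrative
                        .withMethod(Method.DIRECT_READ)
                        .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                        .withRowRestriction(
                            "(type = 'story' OR type = 'job') AND score > 2"));

            pipeline.run().waitUntilFinish();
          }
        }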

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 6:49:19 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
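
    [editor's note] The warning above means the run computed read_time and
    fields_read but skipped publishing them, because no InfluxDB
    measurement/database was configured. A rough sketch of wiring these up,
    assuming Beam's InfluxDBSettings builder in
    org.apache.beam.sdk.testutils.publishing (method names and values here are
    assumptions, not taken from this build):

        import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

        public class MetricsPublishingConfig {
          // Sketch only: builder method names are assumed from Beam's test
          // utilities, and the host/database values are hypothetical.
          static InfluxDBSettings settings() {
            return InfluxDBSettings.builder()
                .withHost("http://localhost:8086")
                .withDatabase("beam_test_metrics")
                .withMeasurement("sql_bqio_read_java_batch")
                .get();
          }
        }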

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 30.615 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 0s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/yulpcrfjc3op4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2518

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2518/display/redirect?page=changes>

Changes:

[noreply] Revert "[BEAM-12993] Update to Debezium 1.7.0.Final (#15636)"

[noreply] [BEAM-12769] Few fixes related to Java Class Lookup based cross-language


------------------------------------------
[...truncated 340.22 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 031b61a29a86ea8d1e4fd08e92109785
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 08, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 08, 2021 12:45:19 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 08, 2021 12:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 08, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
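
    [editor's note] This is the missing-schema case the message describes: the
    PCollection of Beam Rows produced under BeamIOSourceRel has no schema
    attached, so no Row coder can be inferred. A minimal sketch of the first
    suggested fix, with an illustrative four-field schema standing in for the
    real HACKER_NEWS one:

        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        class RowSchemaFix {
          // Illustrative schema; the real table has more fields.
          static final Schema SCHEMA =
              Schema.builder()
                  .addStringField("author")
                  .addStringField("type")
                  .addStringField("title")
                  .addInt64Field("score")
                  .build();

          // Attaching a schema lets Beam derive the Row coder, which is
          // exactly what the IllegalStateException above asks for.
          static PCollection<Row> attachSchema(PCollection<Row> rows) {
            return rows.setRowSchema(SCHEMA);
          }
        }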

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 08, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
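
    [editor's note] The error text also offers .setCoder() as an alternative;
    for Rows the two routes converge, since RowCoder is just the schema-derived
    coder. A sketch of that variant (schema passed in, as in the previous
    example):

        import org.apache.beam.sdk.coders.RowCoder;
        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        class RowCoderFix {
          // RowCoder.of(schema) is the coder that setRowSchema(schema) would
          // register implicitly.
          static PCollection<Row> attachCoder(PCollection<Row> rows, Schema schema) {
            return rows.setCoder(RowCoder.of(schema));
          }
        }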

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 08, 2021 12:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 08, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 08, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 08, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2555889397093215436.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-zIVKvBbXeSgmPaO1H_VRjHDJIHN4eyEfstIOd_AXtRQ.jar
    Oct 08, 2021 12:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 08, 2021 12:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 08, 2021 12:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 2f37509368649600704f3cda080a3411812fee0625419264c5019a15c17f78d6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-LzdQk2hklgBwTzzaCAo0EYEv7gYlQZJkxQGaFcF_eNY.pb
    Oct 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 08, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-07_17_45_32-15240630194847343964?project=apache-beam-testing
    Oct 08, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-07_17_45_32-15240630194847343964
    Oct 08, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-07_17_45_32-15240630194847343964
    Oct 08, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-08T00:45:36.025Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 08, 2021 12:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:43.194Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:43.871Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:43.940Z: Expanding GroupByKey operations into optimizable parts.
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:43.971Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:44.054Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:44.094Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:44.120Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:44.534Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:44.671Z: Starting 5 workers in us-central1-c...
    Oct 08, 2021 12:46:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:46:14.855Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 08, 2021 12:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:46:28.464Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 08, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:46:53.400Z: Workers have started successfully.
    Oct 08, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:46:53.432Z: Workers have started successfully.
    Oct 08, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:47:21.774Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:47:21.987Z: Cleaning up.
    Oct 08, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:47:22.137Z: Stopping worker pool...
    Oct 08, 2021 12:49:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:49:44.857Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 08, 2021 12:49:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:49:44.906Z: Worker pool stopped.
    Oct 08, 2021 12:49:52 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-07_17_45_32-15240630194847343964 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 94b03e79-18c4-4bc7-b87d-e20ef1229ac1 and timestamp: 2021-10-08T00:49:52.779000000Z:
                     Metric:                    Value:
                   read_time                     9.281
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 12:49:53 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 38.699 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 33s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/pnfornx7rifqe

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2517

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2517/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13015] Remove the overhead of SpecMonitoringInfoValidator

[noreply] Minor: Replace generic external.py links in multi-language documentation

[kawaigin] Updated screendiff integration test golden screenshots.


------------------------------------------
[...truncated 342.00 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 031b61a29a86ea8d1e4fd08e92109785
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 07, 2021 6:49:30 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 07, 2021 6:49:31 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 07, 2021 6:49:31 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 07, 2021 6:49:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:49:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:49:35 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 6:49:35 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:49:35 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:49:35 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 6:49:36 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 07, 2021 6:49:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 6:49:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:49:36 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 6:49:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 6:49:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:49:36 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 07, 2021 6:49:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 07, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 07, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 07, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6620371285689845389.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Jl0DvZDCGeVJjiJ89Td9PnX99FmVfDHkZEWAmaMfOb4.jar
    Oct 07, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 07, 2021 6:49:41 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 07, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash d626e53be9a553be32ec908c07115f0e438a48d125e3fd3cf14675da27ef69d3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-1iblO-mlU74y7JCMBxFfDkOKSNEl4_088UZ12ifvadM.pb
    Oct 07, 2021 6:49:43 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 07, 2021 6:49:44 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 07, 2021 6:49:44 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 07, 2021 6:49:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 07, 2021 6:49:45 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-07_11_49_44-8045755234887827204?project=apache-beam-testing
    Oct 07, 2021 6:49:45 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-07_11_49_44-8045755234887827204
    Oct 07, 2021 6:49:45 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-07_11_49_44-8045755234887827204
    Oct 07, 2021 6:49:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-07T18:49:47.676Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:54.419Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.084Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.120Z: Expanding GroupByKey operations into optimizable parts.
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.153Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.217Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.268Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.300Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.593Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 6:49:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.656Z: Starting 5 workers in us-central1-a...
    Oct 07, 2021 6:50:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:50:19.678Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 07, 2021 6:50:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:50:32.955Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 07, 2021 6:50:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:50:32.988Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 07, 2021 6:50:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:50:43.245Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 07, 2021 6:51:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:51:07.455Z: Workers have started successfully.
    Oct 07, 2021 6:51:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:51:07.485Z: Workers have started successfully.
    Oct 07, 2021 6:51:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:51:33.555Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 6:51:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:51:33.685Z: Cleaning up.
    Oct 07, 2021 6:51:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:51:33.757Z: Stopping worker pool...
    Oct 07, 2021 6:54:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:53:58.752Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 07, 2021 6:54:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:53:58.795Z: Worker pool stopped.
    Oct 07, 2021 6:54:03 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-07_11_49_44-8045755234887827204 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7be44caf-f1ee-4388-ae97-a48e2aab0939 and timestamp: 2021-10-07T18:54:03.966000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.878

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 6:54:04 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 37.459 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 27s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/mkufw5o2a6ps6

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2516

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2516/display/redirect>

Changes:


------------------------------------------
[...truncated 337.72 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2a0449772da4ccb76cdfc954d2c4c73f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 07, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 07, 2021 12:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 07, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 07, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
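
The exception text above names its own fix: a PCollection of Beam Rows cannot get a coder by inference, so a schema has to be attached explicitly via PCollection.setRowSchema (or a coder via setCoder). Below is a minimal, self-contained sketch of that pattern; the field names mirror the query's projected columns, but the field types and the DoFn are assumptions for illustration, not the IT's actual RowMonitor code:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Field names mirror the projected columns; the types are assumed.
        final Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        PCollection<Row> rows =
            p.apply(Create.of(Arrays.asList("seed")))
                .apply(
                    "MakeRows",
                    ParDo.of(
                        new DoFn<String, Row>() {
                          @ProcessElement
                          public void process(ProcessContext c) {
                            c.output(
                                Row.withSchema(schema)
                                    .addValues("someone", "story", "a title", 3L)
                                    .build());
                          }
                        }))
                // Without this line, coder inference fails with the same
                // IllegalStateException as logged above.
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }

setRowSchema(schema) is effectively shorthand for setting a RowCoder built from the schema, and it has to run before the PCollection is finished specifying, i.e. before anything downstream consumes it.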

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 07, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 07, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 07, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 07, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 07, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7672760343491070526.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-0AqosBYPr7A4cZyEECcieDmxYVkNmN6SWWlTwx80DW0.jar
    Oct 07, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mongodb/mongo-java-driver/3.12.7/1f45c6a397feeb46da75425619333d1cc6f90f78/mongo-java-driver-3.12.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mongo-java-driver-3.12.7-D_zgBJWNb9mzmuetJ37a0X9XtpcfSGsXYpxe6eE8Tao.jar
    Oct 07, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 07, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 07, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash c817d4a4b3e9441df5a809d7242b495b8772e2cd04ab217c8006c69f5414571f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-yBfUpLPpRB31qAnXJCtJW4dy4s0EqyF8gAbGn1QUVx8.pb
    Oct 07, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 07, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 07, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 07, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 07, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-07_05_45_08-10394073095498599519?project=apache-beam-testing
    Oct 07, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-07_05_45_08-10394073095498599519
    Oct 07, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-07_05_45_08-10394073095498599519
    Oct 07, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-07T12:45:13.773Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 07, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:21.097Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 07, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:21.882Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 07, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:21.925Z: Expanding GroupByKey operations into optimizable parts.
    Oct 07, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:21.955Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 07, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:22.025Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 07, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:22.055Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 07, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:22.089Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 07, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:22.442Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:22.555Z: Starting 5 workers in us-central1-c...
    Oct 07, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:40.896Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 07, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:58.920Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 07, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:58.960Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 07, 2021 12:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:46:09.223Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 07, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:46:33.006Z: Workers have started successfully.
    Oct 07, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:46:33.032Z: Workers have started successfully.
    Oct 07, 2021 12:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:47:02.855Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:47:03.059Z: Cleaning up.
    Oct 07, 2021 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:47:03.174Z: Stopping worker pool...
    Oct 07, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:49:26.217Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 07, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:49:26.281Z: Worker pool stopped.
    Oct 07, 2021 12:49:31 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-07_05_45_08-10394073095498599519 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6e527e5f-1aa7-4e38-b4f8-6a4e5071e8f0 and timestamp: 2021-10-07T12:49:31.491000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.822

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 12:49:31 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
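
For comparison, the projection and filter the planner pushed down in the passing test above (usedFields [by, type, title, score]; filter (type = 'story' OR type = 'job') AND score > 2) are the same knobs BigQueryIO exposes directly for Storage Read API reads. A hedged sketch of the equivalent hand-written read follows; the table id is an assumption for illustration, since this log never names the underlying table:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        p.apply(
            BigQueryIO.readTableRows()
                // Assumed table id for illustration; not taken from this log.
                .from("bigquery-public-data:hacker_news.full")
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Projection push-down: only these columns are read.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Filter push-down: evaluated server-side by the Storage Read API.
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }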

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 40.306 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/b4yfzyadkldzm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2515

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2515/display/redirect?page=changes>

Changes:

[david.prieto] [BEAM-12950] Not delete orphaned files to avoid missing events

[david.prieto] [BEAM-12950] Add Bug fix description to CHANGES.md

[david.prieto] [BEAM-12950] fix linter issues

[david.prieto] [BEAM-12950] Skip unit test


------------------------------------------
[...truncated 349.43 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2a0449772da4ccb76cdfc954d2c4c73f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'

Gradle Test Executor 4 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 07, 2021 6:46:10 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 07, 2021 6:46:11 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 07, 2021 6:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 07, 2021 6:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:46:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 6:46:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:46:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:46:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 6:46:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 6:46:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 07, 2021 6:46:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 07, 2021 6:46:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 07, 2021 6:46:21 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 07, 2021 6:46:21 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 07, 2021 6:46:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5720442946869724837.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-fOJgyedDSivd487OB6ea0m91XAucy7BcTxFfRD88Ksw.jar
    Oct 07, 2021 6:46:22 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigtable-emulator/0.137.1/b1639aa134de1302e43d9e9c3843f6ff853c510f/google-cloud-bigtable-emulator-0.137.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigtable-emulator-0.137.1-V2SlLU4BjIGf5QoF7YL-A-QNjw49NdXrNM4QoX9MXSI.jar
    Oct 07, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 2 seconds
    Oct 07, 2021 6:46:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 07, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 276fb88503a5f9bc2605353dd1d58a0cef2327ac6845a1f630de511809dd6b42> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-J2-4hQOl-bwmBTU90dWKDO8jJ6xoRaH2MN5RGAnda0I.pb
    Oct 07, 2021 6:46:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 07, 2021 6:46:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 07, 2021 6:46:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 07, 2021 6:46:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 07, 2021 6:46:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-06_23_46_26-8236915367545468161?project=apache-beam-testing
    Oct 07, 2021 6:46:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-06_23_46_26-8236915367545468161
    Oct 07, 2021 6:46:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-06_23_46_26-8236915367545468161
    Oct 07, 2021 6:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-07T06:46:29.807Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 07, 2021 6:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:37.257Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.194Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.227Z: Expanding GroupByKey operations into optimizable parts.
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.263Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.335Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.362Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.394Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.737Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.794Z: Starting 5 workers in us-central1-c...
    Oct 07, 2021 6:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:55.147Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 07, 2021 6:47:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:47:18.969Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 07, 2021 6:47:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:47:44.306Z: Workers have started successfully.
    Oct 07, 2021 6:47:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:47:44.342Z: Workers have started successfully.
    Oct 07, 2021 6:48:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:48:13.342Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 6:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:48:13.478Z: Cleaning up.
    Oct 07, 2021 6:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:48:13.560Z: Stopping worker pool...
    Oct 07, 2021 6:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:50:36.653Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 07, 2021 6:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:50:36.706Z: Worker pool stopped.
    Oct 07, 2021 6:50:42 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-06_23_46_26-8236915367545468161 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6dd3dbd4-3dfe-4a18-b85b-369cd13c4e57 and timestamp: 2021-10-07T06:50:42.731000000Z:
                     Metric:                    Value:
                   read_time                     8.653
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 6:50:43 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 37.67 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 25s
152 actionable tasks: 104 executed, 48 from cache

Publishing build scan...
https://gradle.com/s/aijag52hzjdly

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2514

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2514/display/redirect?page=changes>

Changes:

[stefan.istrate] Fix "too many pings" errors.

[stefan.istrate] Increase keepalive timeout to 5 minutes.

[Robert Bradshaw] Dead letter option.

[Robert Bradshaw] Guard setup.py logic with __main__ condition.

[stefan.istrate] Fix yapf complaints.

[aydar.zaynutdinov] [BEAM-12969] [Playground]

[Robert Bradshaw] Add the ability to use subprocesses with the dead letter queue.

[Robert Bradshaw] Support multi-output DoFns.

[noreply] Merge pull request #15510 from [BEAM-12883] Add coder for

[Robert Bradshaw] multi-output fix

[Robert Bradshaw] Add thresholding to dead letter pattern.

[Robert Bradshaw] threshold test fixes

[Robert Bradshaw] Better naming, documentation.

[noreply] [BEAM-12482] Ensure that we ignore schema update options when loading

[Kyle Weaver] Moving to 2.35.0-SNAPSHOT on master branch.

[Kyle Weaver] Add 2.35.0 section to changelog.

[noreply] Fix email links in the contact page


------------------------------------------
[...truncated 349.20 KB...]
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 07, 2021 12:52:07 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 07, 2021 12:52:08 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 07, 2021 12:52:08 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 07, 2021 12:52:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:52:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:52:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 12:52:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:52:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:52:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:52:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 12:52:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:52:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:52:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 12:52:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 07, 2021 12:52:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 07, 2021 12:52:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-ZgoREHN3SqOQclIGldjYWlJZYlnicfJ-w-LJBzxpP7U.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-h5Vm59YQQa-RcmHCCG09Iw6skDekxZqFrPsGtnvUHB4.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests-Lg8Zg7us2s6VPtPB7f5WrJKFl5dhDd82wMfxmGkALkE.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3661911375608363829.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Glt5vUeHYyRGwrJxMcjzgrP8jbhpSsVp0FNZwV7fzoo.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests-__o_cT2WEL8WknZtwTW8aQevksYt5UgSYWmasm174nQ.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/join-library/build/libs/beam-sdks-java-extensions-join-library-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-join-library-2.35.0-SNAPSHOT-xz-aSlhizJi8jigGo3amZ2g_MkglE3dBulicsi2UX5U.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-4bF8QpaxkWlkdPssX5OV0S9rIzFuRU0pojrN2fBwjjk.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tests-I-osQrG-lSEES6RD4ibuovIeVl-_U3nOEEsBLo1rYrA.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/udf/build/libs/beam-sdks-java-extensions-sql-udf-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-udf-2.35.0-SNAPSHOT-MlQQAS1Je1cH4lXtIOKM3zL6o97S5F7sJQ3zUailw08.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 241 files cached, 9 files newly uploaded in 0 seconds
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 2cc5f020f31bc53b80cca906bd659a4b383ab008490887abdca951b75fcb9b02> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-LMXwIPMbxTuAzKkGvWWaSzg6sAhJCIer3KlRt1_LmwI.pb
    Oct 07, 2021 12:52:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 07, 2021 12:52:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 07, 2021 12:52:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 07, 2021 12:52:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 07, 2021 12:52:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-06_17_52_21-9042450593588474239?project=apache-beam-testing
    Oct 07, 2021 12:52:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-06_17_52_21-9042450593588474239
    Oct 07, 2021 12:52:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-06_17_52_21-9042450593588474239
    Oct 07, 2021 12:52:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-07T00:52:25.042Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:35.558Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:36.543Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:36.610Z: Expanding GroupByKey operations into optimizable parts.
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:36.646Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:36.716Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:36.749Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:36.804Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:37.205Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:37.284Z: Starting 5 workers in us-central1-c...
    Oct 07, 2021 12:52:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:50.577Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 07, 2021 12:53:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:53:22.376Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 07, 2021 12:53:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:53:48.961Z: Workers have started successfully.
    Oct 07, 2021 12:53:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:53:48.990Z: Workers have started successfully.
    Oct 07, 2021 12:54:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:54:18.283Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 12:54:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:54:18.547Z: Cleaning up.
    Oct 07, 2021 12:54:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:54:18.639Z: Stopping worker pool...
    Oct 07, 2021 12:56:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:56:47.140Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 07, 2021 12:56:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:56:47.198Z: Worker pool stopped.
    Oct 07, 2021 12:56:53 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-06_17_52_21-9042450593588474239 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): fecbac54-db59-483e-a95e-d486ca08c0d6 and timestamp: 2021-10-07T00:56:53.817000000Z:
                     Metric:                    Value:
                   read_time                     8.459
                 fields_read                 4375276.0
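
    The read_time and fields_read values above come from the ParDo(TimeMonitor)
    and ParDo(RowMonitor) steps in the pipeline. As an illustration only (the
    namespace and counter name are assumptions, not the test's actual
    monitors), a counter like fields_read can be recorded with Beam's Metrics
    API:

        import org.apache.beam.sdk.metrics.Counter;
        import org.apache.beam.sdk.metrics.Metrics;
        import org.apache.beam.sdk.transforms.DoFn;
        import org.apache.beam.sdk.values.Row;

        // Hypothetical monitor: adds one increment per field of every Row it passes through.
        class RowMonitorSketch extends DoFn<Row, Row> {
          private final Counter fieldsRead = Metrics.counter("perf", "fields_read");

          @ProcessElement
          public void processElement(@Element Row row, OutputReceiver<Row> out) {
            fieldsRead.inc(row.getFieldCount());
            out.output(row);
          }
        }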

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 12:56:54 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 50.461 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 12s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/arjb4enc6ighm

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2513

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2513/display/redirect?page=changes>

Changes:

[Robert Bradshaw] Restore "Default to Runner v2 for Python Streaming jobs. (#15140)"

[Robert Bradshaw] Avoid incompatible setting.

[noreply] [BEAM-12909][BEAM-12849]  Add support for running spark3 nexmark queries


------------------------------------------
[...truncated 344.47 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c4761723fbcbd113bf133425c2d1ed59
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 06, 2021 6:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 06, 2021 6:45:23 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 06, 2021 6:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 06, 2021 6:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 6:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 6:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
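
    The readUsingDirectReadMethod and readUsingDefaultMethod failures above
    (and the identical ones in the builds below) are the coder-inference
    problem the exception message itself describes: the Row PCollection
    produced by ParDo(RowMonitor) has no schema attached, so no RowCoder can
    be inferred. A minimal sketch of the fix the message suggests, with field
    types assumed from the query's projected columns:

        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        class RowSchemaFix {
          // Attaching a schema lets Beam infer a RowCoder, per the error text.
          static PCollection<Row> withRowSchema(PCollection<Row> rows) {
            Schema schema =
                Schema.builder()
                    .addNullableField("author", Schema.FieldType.STRING)
                    .addNullableField("type", Schema.FieldType.STRING)
                    .addNullableField("title", Schema.FieldType.STRING)
                    .addNullableField("score", Schema.FieldType.INT64) // type assumed
                    .build();
            return rows.setRowSchema(schema);
          }
        }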

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 06, 2021 6:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 06, 2021 6:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 06, 2021 6:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 06, 2021 6:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test271684441090064629.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-S1tvsTxoMyC9bA_SXB-_80Taz36u53aDF2cA4oIbfbE.jar
    Oct 06, 2021 6:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 06, 2021 6:45:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 06, 2021 6:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 0c6914d994f39b550a3132e4e4ca6f872bcd01899e9b6c0a5cf8d03f1e5a1ea5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-DGkU2ZTzm1UKMTLk5MpvhyvNAYmem2wKXPjQPx5aHqU.pb
    Oct 06, 2021 6:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 06, 2021 6:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 06, 2021 6:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 06, 2021 6:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 06, 2021 6:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-06_11_45_37-4958870658299751609?project=apache-beam-testing
    Oct 06, 2021 6:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-06_11_45_37-4958870658299751609
    Oct 06, 2021 6:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-06_11_45_37-4958870658299751609
    Oct 06, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-06T18:45:41.035Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 06, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:47.718Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 06, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.023Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 06, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.055Z: Expanding GroupByKey operations into optimizable parts.
    Oct 06, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.080Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 06, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.161Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 06, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.193Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 06, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.223Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 06, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.549Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.615Z: Starting 5 workers in us-central1-c...
    Oct 06, 2021 6:46:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:46:15.626Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 06, 2021 6:46:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:46:38.054Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 06, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:47:04.178Z: Workers have started successfully.
    Oct 06, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:47:04.219Z: Workers have started successfully.
    Oct 06, 2021 6:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:47:33.910Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 6:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:47:34.116Z: Cleaning up.
    Oct 06, 2021 6:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:47:34.217Z: Stopping worker pool...
    Oct 06, 2021 6:49:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:49:52.921Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 06, 2021 6:49:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:49:52.980Z: Worker pool stopped.
    Oct 06, 2021 6:49:59 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-06_11_45_37-4958870658299751609 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e260e0c1-d250-44b7-aa11-903d6bed5bf5 and timestamp: 2021-10-06T18:49:59.289000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.493

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 6:49:59 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 41.25 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 38s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/c3bggcoowgbao

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2512

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2512/display/redirect>

Changes:


------------------------------------------
[...truncated 336.83 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 6793a9128f34eef6e236f990f5b36a91
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 06, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 06, 2021 12:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 06, 2021 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 06, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 06, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 06, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 06, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 06, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 06, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7996053131578164699.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-cIDK8C9p7FYllt4Tz1Fm4_Qi2rZjBEi8elnaJdsXbm0.jar
    Oct 06, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 06, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 06, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash a80ee476681df5da9f2022da40624e9f50f5de28a5d5a8d5a48a17d188fa3c4d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-qA7kdmgd9dqfICLaQGJOn1D13iil1ajVpIoX0Yj6PE0.pb
    Oct 06, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 06, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 06, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 06, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 06, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-06_05_45_03-8741306478298111401?project=apache-beam-testing
    Oct 06, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-06_05_45_03-8741306478298111401
    Oct 06, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-06_05_45_03-8741306478298111401
    Oct 06, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-06T12:45:06.681Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 06, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:13.434Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 06, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.058Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 06, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.100Z: Expanding GroupByKey operations into optimizable parts.
    Oct 06, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.142Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 06, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.218Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 06, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.245Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 06, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.277Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 06, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.650Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.746Z: Starting 5 workers in us-central1-c...
    Oct 06, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:21.604Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 06, 2021 12:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:53.905Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 06, 2021 12:46:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:46:18.004Z: Workers have started successfully.
    Oct 06, 2021 12:46:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:46:18.039Z: Workers have started successfully.
    Oct 06, 2021 12:46:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:46:44.684Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 12:46:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:46:44.860Z: Cleaning up.
    Oct 06, 2021 12:46:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:46:44.987Z: Stopping worker pool...
    Oct 06, 2021 12:49:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:48:59.467Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 06, 2021 12:49:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:48:59.544Z: Worker pool stopped.
    Oct 06, 2021 12:49:05 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-06_05_45_03-8741306478298111401 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 35cf7617-2b4f-4671-ade3-fffbe82e37ef and timestamp: 2021-10-06T12:49:05.587000000Z:
                     Metric:                    Value:
                   read_time                     7.461
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 12:49:06 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 18.959 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 48s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/w3sbgdw5q6ji6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2511

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2511/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #15602 from [BEAM-10917] Add support for BigQuery


------------------------------------------
[...truncated 339.81 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 6793a9128f34eef6e236f990f5b36a91
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 06, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 06, 2021 6:45:14 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 06, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 06, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@415755770]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 06, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@193331377]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
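
Both failures above share one root cause: the Row PCollection produced inside BeamSqlRelUtils.toPCollection reaches the next transform without a schema, so no RowCoder can be inferred. A minimal sketch of the fix the error message itself suggests; the helper name is hypothetical and the field names/types are assumed from the query's SELECT list:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // Attach an explicit schema so the SDK can derive a RowCoder; the field
      // names and types are an assumption based on the projected columns.
      static PCollection<Row> withRowSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();
        // Equivalent alternative: rows.setCoder(RowCoder.of(schema)).
        return rows.setRowSchema(schema);
      }
    }

Either call has to happen before the PCollection is consumed, i.e. before the Pipeline.applyTransform step in the trace above finishes specifying its input.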

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 06, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
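
The plan and filter lines above are the push-down itself: the planner trimmed the scan to the four used fields and handed the entire WHERE clause to the BigQuery Storage Read API. A rough hand-written equivalent against BigQueryIO, as a sketch only: the table name is an assumption (the test's actual table is configured elsewhere), while the fields and row restriction mirror the log:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        p.apply(
            "Read HACKER_NEWS with push-down",
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // assumed table
                .withMethod(Method.DIRECT_READ) // the Storage Read API path
                // Projection push-down: only the used fields leave BigQuery.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate push-down: the filter logged above, verbatim.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
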
    Oct 06, 2021 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 06, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 06, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 06, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7509582736706140335.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-1M2W-FTlaVjjYFU-04baqCJsRh706SQMfQITWxLcTGk.jar
    Oct 06, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 06, 2021 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 06, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 133be6250525ccddfac64adec29fbb5ff4c5f4c589c6f3c039486c04f0668db3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-EzvmJQUlzN36xkrewp-7X_TF9MWJxvPAOUhsBPBmjbM.pb
    Oct 06, 2021 6:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 06, 2021 6:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 06, 2021 6:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 06, 2021 6:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 06, 2021 6:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-05_23_45_33-10813076697480166768?project=apache-beam-testing
    Oct 06, 2021 6:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-05_23_45_33-10813076697480166768
    Oct 06, 2021 6:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-05_23_45_33-10813076697480166768
    Oct 06, 2021 6:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-06T06:45:36.315Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:42.248Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:42.985Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:43.045Z: Expanding GroupByKey operations into optimizable parts.
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:43.073Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:43.147Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:43.182Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:43.203Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:43.565Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:43.643Z: Starting 5 workers in us-central1-c...
    Oct 06, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:51.896Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 06, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:46:27.982Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 06, 2021 6:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:46:53.581Z: Workers have started successfully.
    Oct 06, 2021 6:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:46:53.634Z: Workers have started successfully.
    Oct 06, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:47:25.072Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 6:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:47:25.332Z: Cleaning up.
    Oct 06, 2021 6:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:47:25.437Z: Stopping worker pool...
    Oct 06, 2021 6:49:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:49:43.950Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 06, 2021 6:49:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:49:44.022Z: Worker pool stopped.
    Oct 06, 2021 6:49:50 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-05_23_45_33-10813076697480166768 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7d50295c-1eaf-4146-8046-05026846ba4a and timestamp: 2021-10-06T06:49:51.030000000Z:
                     Metric:                    Value:
                   read_time                      8.51
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 6:49:51 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 43.117 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/4hswe7wmgbayy

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2510

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2510/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11831] Parially Revert "[BEAM-11805] Replace user-agent for

[kawaigin] [BEAM-10708] Enable submit beam_sql built jobs to Dataflow


------------------------------------------
[...truncated 342.77 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 6793a9128f34eef6e236f990f5b36a91
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 06, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 06, 2021 12:45:19 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 06, 2021 12:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 06, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 06, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 06, 2021 12:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 06, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 06, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 06, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test265651886755238909.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-xzJi6obHPmtYTjhps4MRyikZ1fKeBXN5TBEfa7UQnNI.jar
    Oct 06, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 06, 2021 12:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 06, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 76a702490cc4c2876299bf32ebc6cf89638a6c8e5c338dc20679ca9b86edb72a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-dqcCSQzEwodimb8y68bPiWOKbI5cM43CBnnKm4bttyo.pb
    Oct 06, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 06, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 06, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 06, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 06, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-05_17_45_32-14845024658545748933?project=apache-beam-testing
    Oct 06, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-05_17_45_32-14845024658545748933
    Oct 06, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-05_17_45_32-14845024658545748933
    Oct 06, 2021 12:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-06T00:45:35.635Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:41.176Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:41.977Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:42.027Z: Expanding GroupByKey operations into optimizable parts.
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:42.104Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:42.181Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:42.224Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:42.259Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:42.689Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:42.788Z: Starting 5 workers in us-central1-a...
    Oct 06, 2021 12:45:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:50.009Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 06, 2021 12:46:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:46:18.798Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Oct 06, 2021 12:46:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:46:18.840Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Oct 06, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:46:29.199Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 06, 2021 12:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:46:51.921Z: Workers have started successfully.
    Oct 06, 2021 12:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:46:51.961Z: Workers have started successfully.
    Oct 06, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:47:18.453Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:47:18.600Z: Cleaning up.
    Oct 06, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:47:18.679Z: Stopping worker pool...
    Oct 06, 2021 12:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:49:39.010Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 06, 2021 12:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:49:39.054Z: Worker pool stopped.
    Oct 06, 2021 12:49:45 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-05_17_45_32-14845024658545748933 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): bb49b25e-82a3-4aff-b301-a0c9f02e9eac and timestamp: 2021-10-06T00:49:45.460000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.883

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 12:49:45 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 31.775 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 26s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/hzyumsnlojkwu

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2509

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2509/display/redirect?page=changes>

Changes:

[noreply] Only stage the single (fat) jar when auto-starting expansion service.

[noreply] Disable samza counters (#15659)

[noreply] Merge pull request #15614 from [BEAM-12953] [Playground] Create protobuf


------------------------------------------
[...truncated 338.35 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4da0af97686d2574b64cfb59e094a515
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 05, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 05, 2021 6:44:53 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 05, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 05, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
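
For contrast with the BeamCalcRel plans in the two failing tests above, this plan pushes both the projection (usedFields) and the filter into the BigQuery Storage read itself. A rough hand-written equivalent using plain BigQueryIO is sketched below; the public Hacker News table name is an assumption (the test reads its own copy of the dataset), and the sketch is illustrative rather than the test's actual code.

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // assumed table name
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Equivalent of usedFields=[by, type, title, score]:
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Equivalent of the filter pushed down in the log line above:
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }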
    Oct 05, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 05, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 05, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 05, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5480627460130910001.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-gexQFT7luw6fjnkX6r1qRcNB99zZRM4fAmsKNQZX4zM.jar
    Oct 05, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 05, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 05, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 7fd0ee2c45d319bb9d2963ba7d8ed7fb8d5e7956f6036b63e9214efe0dd9a35f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-f9DuLEXTGbudKWO6fY7X-41eeVb2A2tj6SFO_g3Zo18.pb
    Oct 05, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 05, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 05, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 05, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 05, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-05_11_45_07-6619070403620490252?project=apache-beam-testing
    Oct 05, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-05_11_45_07-6619070403620490252
    Oct 05, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-05_11_45_07-6619070403620490252
    Oct 05, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-05T18:45:11.030Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 05, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:19.658Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:20.471Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:20.513Z: Expanding GroupByKey operations into optimizable parts.
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:20.538Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:20.601Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:20.632Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:20.660Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:21.077Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:21.160Z: Starting 5 workers in us-central1-c...
    Oct 05, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:24.042Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 05, 2021 6:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:52.736Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 05, 2021 6:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:52.816Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 05, 2021 6:46:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:46:03.089Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 05, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:46:27.595Z: Workers have started successfully.
    Oct 05, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:46:27.628Z: Workers have started successfully.
    Oct 05, 2021 6:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:46:59.431Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 6:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:46:59.609Z: Cleaning up.
    Oct 05, 2021 6:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:46:59.696Z: Stopping worker pool...
    Oct 05, 2021 6:49:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:49:20.195Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 05, 2021 6:49:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:49:20.306Z: Worker pool stopped.
    Oct 05, 2021 6:49:26 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-05_11_45_07-6619070403620490252 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9d7f9fdd-656d-4391-84f0-482b42536868 and timestamp: 2021-10-05T18:49:26.334000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.483

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 6:49:26 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
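
The warning above means the InfluxDB measurement and database were never supplied, so this run's metrics are dropped rather than published. One hypothetical way to surface them is as ordinary Beam pipeline options; the interface, option names, and values below are illustrative placeholders, not the flags this Jenkins job actually defines.

    import org.apache.beam.sdk.options.Description;
    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class InfluxOptionsSketch {
      /** Hypothetical options interface; names are placeholders. */
      public interface InfluxPublishOptions extends PipelineOptions {
        @Description("InfluxDB database to publish metrics to")
        String getInfluxDatabase();
        void setInfluxDatabase(String value);

        @Description("InfluxDB measurement name for this test's metrics")
        String getInfluxMeasurement();
        void setInfluxMeasurement(String value);
      }

      public static void main(String[] args) {
        InfluxPublishOptions opts =
            PipelineOptionsFactory.fromArgs(
                    "--influxDatabase=beam_test_metrics",           // placeholder value
                    "--influxMeasurement=sql_bqio_read_java_batch") // placeholder value
                .as(InfluxPublishOptions.class);
        System.out.println(opts.getInfluxDatabase() + "/" + opts.getInfluxMeasurement());
      }
    }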

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 37.66 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 8s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/liugw37s2jtry

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2508

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2508/display/redirect>

Changes:


------------------------------------------
[...truncated 340.63 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0363f80a40f7f671425e1f90539807f1
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'
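
The -DbeamTestPipelineOptions JSON in the command line above is how these integration tests receive their Dataflow configuration: the Gradle task sets it as a system property on the test JVM, and the test reads it back at startup. A minimal sketch of the consuming side, assuming the standard TestPipeline entry point:

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class ReadTestOptionsSketch {
      public static void main(String[] args) {
        // Parses the beamTestPipelineOptions system property if present,
        // falling back to default options otherwise.
        PipelineOptions opts = TestPipeline.testingPipelineOptions();
        System.out.println(opts.getRunner());
      }
    }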

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 05, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 05, 2021 12:45:16 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 05, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 05, 2021 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 12:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 05, 2021 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 05, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 05, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 05, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5893968782445842855.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-m8M_J_ys9Ux59DAoW_KcMKuRMM3_iQ3XvUH9zOOWMDc.jar
    Oct 05, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 05, 2021 12:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 05, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 8c05ef4335ada3ce545c421f978e10b862601d93f2bbccd699d6f1c59e0e06af> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-jAXvQzWto85UXEIfl44QuGJgHZPyu8zWmdbxxZ4OBq8.pb
    Oct 05, 2021 12:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 05, 2021 12:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 05, 2021 12:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 05, 2021 12:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 05, 2021 12:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-05_05_45_29-8122112085929557922?project=apache-beam-testing
    Oct 05, 2021 12:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-05_05_45_29-8122112085929557922
    Oct 05, 2021 12:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-05_05_45_29-8122112085929557922
    Oct 05, 2021 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-05T12:45:32.765Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:41.360Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.213Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.260Z: Expanding GroupByKey operations into optimizable parts.
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.291Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.368Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.399Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.431Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.856Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.935Z: Starting 5 workers in us-central1-c...
    Oct 05, 2021 12:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:46:06.203Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 05, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:46:23.506Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 05, 2021 12:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:46:49.873Z: Workers have started successfully.
    Oct 05, 2021 12:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:46:49.906Z: Workers have started successfully.
    Oct 05, 2021 12:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:47:18.939Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 12:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:47:19.123Z: Cleaning up.
    Oct 05, 2021 12:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:47:19.220Z: Stopping worker pool...
    Oct 05, 2021 12:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:49:42.684Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 05, 2021 12:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:49:42.811Z: Worker pool stopped.
    Oct 05, 2021 12:49:48 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-05_05_45_29-8122112085929557922 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7b4ddd42-b28f-4078-a4f0-b3d53bf5eeb1 and timestamp: 2021-10-05T12:49:48.267000000Z:
                     Metric:                    Value:
                   read_time                     7.253
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 12:49:48 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 36.56 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 28s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/fzj654rglq6go

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2507

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2507/display/redirect?page=changes>

Changes:

[danthev] Switch hintNumWorkers to ValueProvider, switch firstInstant to side


------------------------------------------
[...truncated 340.85 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0363f80a40f7f671425e1f90539807f1
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 05, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 05, 2021 6:45:17 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 05, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 05, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 05, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 05, 2021 6:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 05, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 05, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 05, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3677606633200463572.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-J0qIFCndQlEv1OSRlUg532mtw9zW0K-tdB3VO1f8VDM.jar
    Oct 05, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 05, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 05, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash f1893edfcc0c9153f4c1ed74317021930e8f7bebc716d5de610c0b2bad231f6d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8Yk-38wMkVP0we10MXAhkw6Pe-vHFtXeYQwLK60jH20.pb
    Oct 05, 2021 6:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 05, 2021 6:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 05, 2021 6:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 05, 2021 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 05, 2021 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-04_23_45_30-8067968371807703600?project=apache-beam-testing
    Oct 05, 2021 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-04_23_45_30-8067968371807703600
    Oct 05, 2021 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-04_23_45_30-8067968371807703600
    Oct 05, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-05T06:45:33.916Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 05, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:39.500Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 05, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.214Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 05, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.243Z: Expanding GroupByKey operations into optimizable parts.
    Oct 05, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.273Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 05, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.324Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 05, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.350Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 05, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.372Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 05, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.712Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.777Z: Starting 5 workers in us-central1-a...
    Oct 05, 2021 6:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:46:11.862Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 05, 2021 6:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:46:12.737Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 05, 2021 6:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:46:12.764Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 05, 2021 6:46:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:46:23.161Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 05, 2021 6:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:46:46.637Z: Workers have started successfully.
    Oct 05, 2021 6:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:46:46.668Z: Workers have started successfully.
    Oct 05, 2021 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:47:15.236Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:47:15.383Z: Cleaning up.
    Oct 05, 2021 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:47:15.455Z: Stopping worker pool...
    Oct 05, 2021 6:49:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:49:49.190Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 05, 2021 6:49:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:49:49.236Z: Worker pool stopped.
    Oct 05, 2021 6:49:54 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-04_23_45_30-8067968371807703600 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 85992a32-d12e-4eea-9898-f4c3c314fa83 and timestamp: 2021-10-05T06:49:54.576000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.463

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 6:49:54 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
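
A note on the warning above: the InfluxDB publisher was never given a measurement or database, so this run's metrics are only printed to the console. Below is a minimal, hypothetical sketch of supplying them via Beam's test-utils settings builder -- the host, database, and measurement values are placeholders, and the builder's exact surface should be checked against org.apache.beam.sdk.testutils.publishing.InfluxDBSettings:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // Placeholder values -- not this job's real configuration.
        InfluxDBSettings settings = InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_test_metrics")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
        // With measurement and database populated, InfluxDBPublisher's
        // publishWithCheck publishes instead of logging the warning above.
      }
    }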

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 42.1 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 32s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/254skpo3o3wta

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2506

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2506/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #15618 from [BEAM-12968] [Playground] Create README

[noreply] Merge pull request #15626 from [BEAM-12963] [Playground] Create base

[noreply] Merge pull request #15566 from [BEAM-12925] Correct behavior retrieving

[noreply] [BEAM-12911] Update schema translation (Java, Python) to log more

[noreply] Update CHANGES.md for JdbcIO breaking change (#15651)

[noreply] [BEAM-12513] Add Go SDK metrics content to BPG. (#15650)

[noreply] [BEAM-12996] Improve Error Logging in ConfigBuilder (#15646)

[noreply] [BEAM-13000] Disable Reshuffle Translation in Samza Portable Mode


------------------------------------------
[...truncated 344.11 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 05, 2021 12:55:35 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 05, 2021 12:55:37 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 05, 2021 12:55:38 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 05, 2021 12:55:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:55:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:55:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 12:55:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:55:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:55:46 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 12:55:46 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
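
Both failing tests in this run share one root cause: ParDo(RowMonitor) outputs Beam Rows, and a PCollection<Row> has no default coder until a schema is attached. Below is a minimal sketch of the remedy the exception itself names, PCollection.setRowSchema; the pipeline, schema fields, and DoFn are illustrative stand-ins, not the integration test's actual code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Illustrative schema matching the columns the query selects.
        final Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        PCollection<Row> rows = p
            .apply(Create.of("alice,story,Hello,3"))
            .apply(ParDo.of(new DoFn<String, Row>() {
              @ProcessElement
              public void process(@Element String line, OutputReceiver<Row> out) {
                String[] f = line.split(",");
                out.output(Row.withSchema(schema)
                    .addValues(f[0], f[1], f[2], Long.parseLong(f[3]))
                    .build());
              }
            }))
            // Without this call, finishSpecifying() raises the
            // IllegalStateException above: Row has no default Coder.
            .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }

Equivalently, calling setCoder(RowCoder.of(schema)) on the output attaches the coder directly, which is the other remedy the message lists.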

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 05, 2021 12:55:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 12:55:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:55:47 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 12:55:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 12:55:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:55:47 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 12:55:47 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1841175909]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
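
As the BEAMPlan> above shows, push-down turns the query into a BigQuery Storage Read API request that projects only the used fields and evaluates the supported predicate server-side. A rough hand-written equivalent of that request, sketched with BigQueryIO's direct-read options (the table reference and pipeline scaffolding are illustrative, not this test's configuration):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply("Read with push-down",
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // illustrative table
                .withMethod(Method.DIRECT_READ)
                // Project only the fields the query uses...
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // ...and push the supported filter to the storage servers.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }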
    Oct 05, 2021 12:55:48 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 05, 2021 12:55:55 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 05, 2021 12:55:55 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 05, 2021 12:55:56 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-LQIYNIp1aM1u2157NJfFX8iQNiKTt9bu1VkD5-famiI.jar
    Oct 05, 2021 12:55:56 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2905102385829149055.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-qry4rld43sBtAEmV2g9xQeuPtvgzWKjkZgDWwNDH-74.jar
    Oct 05, 2021 12:55:58 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 2 seconds
    Oct 05, 2021 12:55:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 05, 2021 12:55:58 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash ec34d17a8ce4cad13c8ee40471eaa78b94eed0c533288b90b67ce9e72adf42bc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-7DTReozkytE8juQEceqni5Tu0MUzKIuQtnzp5yrfQrw.pb
    Oct 05, 2021 12:56:01 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 05, 2021 12:56:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 05, 2021 12:56:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 05, 2021 12:56:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 05, 2021 12:56:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-04_17_56_02-1799180745036754217?project=apache-beam-testing
    Oct 05, 2021 12:56:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-04_17_56_02-1799180745036754217
    Oct 05, 2021 12:56:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-04_17_56_02-1799180745036754217
    Oct 05, 2021 12:56:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-05T00:56:06.569Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:14.817Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:15.692Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:15.721Z: Expanding GroupByKey operations into optimizable parts.
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:15.738Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:15.790Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:15.818Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:15.857Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:16.211Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 12:56:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:16.308Z: Starting 5 workers in us-central1-a...
    Oct 05, 2021 12:56:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:48.383Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 05, 2021 12:56:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:54.666Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Oct 05, 2021 12:56:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:54.700Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Oct 05, 2021 12:57:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:57:05.000Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 05, 2021 12:57:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:57:28.636Z: Workers have started successfully.
    Oct 05, 2021 12:57:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:57:28.664Z: Workers have started successfully.
    Oct 05, 2021 12:57:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:57:58.211Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 12:57:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:57:58.350Z: Cleaning up.
    Oct 05, 2021 12:57:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:57:58.430Z: Stopping worker pool...
    Oct 05, 2021 1:00:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T01:00:22.209Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 05, 2021 1:00:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T01:00:22.254Z: Worker pool stopped.
    Oct 05, 2021 1:00:29 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-04_17_56_02-1799180745036754217 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f0f84e75-81fd-42c7-81a1-abfc1b6ee917 and timestamp: 2021-10-05T01:00:29.759000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.162


Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 1:00:30 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

3 tests completed, 2 failed
Finished generating test XML results (0.041 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.074 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 5 mins 0.158 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 36s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/6wpswrc3eggoq

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2505

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2505/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11516] Upgrade to pylint 2.11.1, fix warnings (#15612)

[noreply] [BEAM-12979, BEAM-11097] Change cache to store ReStreams, clean up to…

[noreply] [BEAM-3304, BEAM-12513] Trigger changes and Windowing. (#15644)


------------------------------------------
[...truncated 345.64 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 04, 2021 6:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 04, 2021 6:45:20 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 04, 2021 6:45:20 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 04, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
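
The SQL and plans logged above can also be exercised outside the IT harness. A minimal sketch that runs the same projection and filter through Beam SQL against an in-memory, schema-aware PCollection<Row> (all names are illustrative; PCOLLECTION is SqlTransform's implicit table name for a single input):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class HackerNewsSqlSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        PCollection<Row> rows = p.apply(
            Create.of(
                    Row.withSchema(schema).addValues("alice", "story", "Hello", 3L).build(),
                    Row.withSchema(schema).addValues("bob", "comment", "Re: Hello", 9L).build())
                .withRowSchema(schema));

        // Same projection and predicate the planner produced above;
        // only the first row passes the filter.
        PCollection<Row> filtered = rows.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }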

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 6:45:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 04, 2021 6:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 04, 2021 6:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT-h5Vm59YQQa-RcmHCCG09Iw6skDekxZqFrPsGtnvUHB4.jar
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT-tests-Lg8Zg7us2s6VPtPB7f5WrJKFl5dhDd82wMfxmGkALkE.jar
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test72140529376117676.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-PR_BJ3ikKm-6LOqRykJusDqgJsTe7zSj7eRmfQJgYoE.jar
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 3 files newly uploaded in 0 seconds
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash bd8630fd7f753a33f54403bf63981e523a3aabf1206713538442f3b9b568f836> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-vYYw_X91OjP1RAO_Y5geUjo6q_EgZxNThELzubVo-DY.pb
    Oct 04, 2021 6:45:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 04, 2021 6:45:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 04, 2021 6:45:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 04, 2021 6:45:33 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 04, 2021 6:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-04_11_45_33-4273172218913284806?project=apache-beam-testing
    Oct 04, 2021 6:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-04_11_45_33-4273172218913284806
    Oct 04, 2021 6:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-04_11_45_33-4273172218913284806
    Oct 04, 2021 6:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-04T18:45:36.636Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:45.198Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.072Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.102Z: Expanding GroupByKey operations into optimizable parts.
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.131Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.190Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.221Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.248Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.559Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.638Z: Starting 5 workers in us-central1-a...
    Oct 04, 2021 6:45:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:50.114Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 04, 2021 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:46:33.411Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 04, 2021 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:46:33.438Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 04, 2021 6:46:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:46:43.775Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 04, 2021 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:47:07.994Z: Workers have started successfully.
    Oct 04, 2021 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:47:08.018Z: Workers have started successfully.
    Oct 04, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:47:39.652Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:47:39.848Z: Cleaning up.
    Oct 04, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:47:39.932Z: Stopping worker pool...
    Oct 04, 2021 6:50:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:50:09.480Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 04, 2021 6:50:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:50:09.522Z: Worker pool stopped.
    Oct 04, 2021 6:50:15 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-04_11_45_33-4273172218913284806 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e862aec7-8559-413b-acc5-561fea98dfe5 and timestamp: 2021-10-04T18:50:15.536000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.527

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 6:50:15 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 5 mins 0.526 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 56s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/cm55novmyco2i

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2504

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2504/display/redirect>

Changes:


------------------------------------------
[...truncated 337.89 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 04, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 04, 2021 12:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 04, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 04, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
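
The "Unable to return a default Coder" failure above is Beam's generic coder-inference error for a PCollection<Row>: the collection reached pipeline construction without a schema attached, so none of the three listed fallbacks (an explicit setCoder, CoderRegistry inference, the producing PTransform's default) could supply one. As a minimal sketch of the remedy the message itself names (this illustrates the general PCollection.setRowSchema API, not the actual fix applied to BeamSqlRelUtils; the schema fields are hypothetical, mirroring the projected columns of the test query):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      // Hypothetical schema mirroring the projected columns of the test query.
      static final Schema SCHEMA =
          Schema.builder()
              .addStringField("author")
              .addStringField("type")
              .addStringField("title")
              .addInt64Field("score")
              .build();

      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<Row> rows =
            p.apply(Create.of("story", "job"))
                .apply(
                    ParDo.of(
                        new DoFn<String, Row>() {
                          @ProcessElement
                          public void process(@Element String type, OutputReceiver<Row> out) {
                            out.output(
                                Row.withSchema(SCHEMA)
                                    .addValues("someone", type, "a title", 3L)
                                    .build());
                          }
                        }))
                // Without this call, coder inference for the ParDo's Row output
                // fails exactly as in the stack trace above ("Cannot provide a
                // coder for a Beam Row"); attaching the schema resolves it.
                .setRowSchema(SCHEMA);

        p.run().waitUntilFinish();
      }
    }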

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 04, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 04, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 04, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 04, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1815534418436434907.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-a-IMR2wQSFe-GFwdBz22Tbp0Uso2DoIkALbtTEexiH4.jar
    Oct 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 64d76306aac984008070e125d6aecc21a1e45f4b6708069d0f55aae50b3d5215> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ZNdjBqrJhACAcOEl1q7MIaHkX0tnCAadD1Wq5Qs9UhU.pb
    Oct 04, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 04, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 04, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 04, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 04, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-04_05_45_03-2710652158831162577?project=apache-beam-testing
    Oct 04, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-04_05_45_03-2710652158831162577
    Oct 04, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-04_05_45_03-2710652158831162577
    Oct 04, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-04T12:45:06.952Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 04, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:14.538Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 04, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:15.376Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 04, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:15.405Z: Expanding GroupByKey operations into optimizable parts.
    Oct 04, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:15.442Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 04, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:15.518Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 04, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:15.543Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 04, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:15.575Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 04, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:15.970Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:16.039Z: Starting 5 workers in us-central1-c...
    Oct 04, 2021 12:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:40.544Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 04, 2021 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:46:01.393Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 04, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:46:24.847Z: Workers have started successfully.
    Oct 04, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:46:24.886Z: Workers have started successfully.
    Oct 04, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:46:54.745Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:46:54.916Z: Cleaning up.
    Oct 04, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:46:54.985Z: Stopping worker pool...
    Oct 04, 2021 12:49:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:49:20.778Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 04, 2021 12:49:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:49:20.825Z: Worker pool stopped.
    Oct 04, 2021 12:49:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-04_05_45_03-2710652158831162577 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): bcbc0d4f-976c-4cfc-b1fb-b0f75a228cab and timestamp: 2021-10-04T12:49:27.844000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.195

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 12:49:28 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
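
For comparison, the column pruning and filter push-down performed by BeamPushDownIOSourceRel_272 in the passing test above correspond roughly to what a hand-written BigQueryIO read over the Storage API does with field selection and a row restriction. A minimal sketch, assuming a hypothetical table name (the field list and predicate are copied from the log; the real test table is defined in BigQueryIOPushDownIT):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // DIRECT_READ lets the BigQuery Storage API prune columns and evaluate
        // the supported part of the filter server-side, as in the plan above.
        p.apply(
            BigQueryIO.readTableRows()
                .from("some-project:some_dataset.HACKER_NEWS") // hypothetical table
                .withMethod(TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }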

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 41.167 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/d6xt5x6bxlooa

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2503

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2503/display/redirect>

Changes:


------------------------------------------
[...truncated 337.87 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 04, 2021 6:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 04, 2021 6:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 04, 2021 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 04, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 04, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 04, 2021 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 04, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 04, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 04, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2667136400420258841.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Bb_Z-0M6T2LgCreRBtcaIHOWYi3qftdkG1mTO2GWyaE.jar
    Oct 04, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 04, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 04, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 450a2558d4479186103647f4b0977d861861ccfd1a0c98b1bfbd0338e3ebb648> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-RQolWNRHkYYQNkf0sJd9hhhhzP0aDJixv70DOOPrtkg.pb
    Oct 04, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 04, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 04, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 04, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 04, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-03_23_45_05-12198235828307472321?project=apache-beam-testing
    Oct 04, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-03_23_45_05-12198235828307472321
    Oct 04, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-03_23_45_05-12198235828307472321
    Oct 04, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-04T06:45:08.658Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 04, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:14.788Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 04, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:15.499Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 04, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:15.536Z: Expanding GroupByKey operations into optimizable parts.
    Oct 04, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:15.562Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 04, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:15.625Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 04, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:15.650Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 04, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:15.681Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 04, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:16.006Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:16.080Z: Starting 5 workers in us-central1-c...
    Oct 04, 2021 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:47.382Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 04, 2021 6:45:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:49.257Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 04, 2021 6:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:46:23.209Z: Workers have started successfully.
    Oct 04, 2021 6:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:46:23.232Z: Workers have started successfully.
    Oct 04, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:46:52.130Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 6:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:46:52.263Z: Cleaning up.
    Oct 04, 2021 6:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:46:52.342Z: Stopping worker pool...
    Oct 04, 2021 6:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:49:12.904Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 04, 2021 6:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:49:12.942Z: Worker pool stopped.
    Oct 04, 2021 6:49:18 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-03_23_45_05-12198235828307472321 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c0ca4b49-f9cd-421a-b444-ecae683e9e86 and timestamp: 2021-10-04T06:49:18.324000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.498

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 6:49:18 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 29.982 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 59s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/gk3pnulmpgp4c

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2502

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2502/display/redirect>

Changes:


------------------------------------------
[...truncated 338.43 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 04, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 04, 2021 12:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 04, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 04, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 04, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
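
Unlike the two failing tests, this plan never materializes a BeamCalcRel: the projection (usedFields) and the filter both travel into the single BeamPushDownIOSourceRel, and the line above confirms the filter reached the BigQuery read itself. Push-down of this kind requires the DIRECT_READ method, which the Beam SQL DDL selects through a table property. A sketch with placeholder project/dataset names and a reduced column list, following the documented bigquery table type:

    -- Placeholder LOCATION and column subset; the TBLPROPERTIES line is the
    -- point: "method": "DIRECT_READ" enables Storage API push-down.
    CREATE EXTERNAL TABLE HACKER_NEWS (
      title VARCHAR,
      `by` VARCHAR,
      score INTEGER,
      `type` VARCHAR
    )
    TYPE bigquery
    LOCATION 'project-id:dataset.hacker_news'
    TBLPROPERTIES '{"method": "DIRECT_READ"}'
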
    Oct 04, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 04, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 04, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 04, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7640060808782137549.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-_O0Ja9OWe15cU_t011NGBV1a8M3VwOqAjZpVDkg-qPI.jar
    Oct 04, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 04, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 04, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 83c757b56fb1694270240bedcfac0542621ff2ca66dffa6368a7e9589c7cfa84> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-g8dXtW-xaUJwJAvtz6wFQmIf8spm3_pjaKfpWJx8-oQ.pb
    Oct 04, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 04, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 04, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 04, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 04, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-03_17_45_04-1600347619784672286?project=apache-beam-testing
    Oct 04, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-03_17_45_04-1600347619784672286
    Oct 04, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-03_17_45_04-1600347619784672286
    Oct 04, 2021 12:45:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-04T00:45:08.100Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:13.680Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:14.480Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:14.520Z: Expanding GroupByKey operations into optimizable parts.
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:14.548Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:14.608Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:14.634Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:14.665Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:15.047Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:15.116Z: Starting 5 workers in us-central1-a...
    Oct 04, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:24.474Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 04, 2021 12:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:00.879Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Oct 04, 2021 12:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:00.910Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Oct 04, 2021 12:46:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:11.234Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 04, 2021 12:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:30.448Z: Workers have started successfully.
    Oct 04, 2021 12:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:30.483Z: Workers have started successfully.
    Oct 04, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:58.280Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:58.426Z: Cleaning up.
    Oct 04, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:58.533Z: Stopping worker pool...
    Oct 04, 2021 12:49:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:49:24.802Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 04, 2021 12:49:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:49:24.839Z: Worker pool stopped.
    Oct 04, 2021 12:49:32 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-03_17_45_04-1600347619784672286 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c9148af6-bf85-40a5-9cc7-b8a40026c2c7 and timestamp: 2021-10-04T00:49:32.632000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      6.64

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 12:49:32 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
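
This warning is why the metrics table above is only printed to stdout: without a measurement and a database, the InfluxDB publisher skips the upload entirely. A sketch of the settings the publisher expects, assuming the builder API in Beam's test-utils publishing package; every value below is a placeholder:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    // Assumed builder API from org.apache.beam.sdk.testutils.publishing;
    // host, database, and measurement values are placeholders.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_test_metrics")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();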

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 44.444 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/g4r7rxbdhxnhe

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2501

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2501/display/redirect>

Changes:


------------------------------------------
[...truncated 337.57 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
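
The -DbeamTestPipelineOptions JSON in the command line above is how these ITs receive their Dataflow configuration: TestPipeline reads that system property and parses it into PipelineOptions. A two-line sketch of the consuming side:

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    // TestPipeline parses the beamTestPipelineOptions system property
    // (the JSON array of --flags above) into a PipelineOptions instance.
    PipelineOptions options = TestPipeline.testingPipelineOptions();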

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 03, 2021 6:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 03, 2021 6:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 03, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 03, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 03, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 03, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 03, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 03, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 03, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6261414022396518308.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-xJD2kEJYDxv8ItFUGHq5jCW2mFsTZLqZNCbcpTtZIdw.jar
    Oct 03, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 03, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 03, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash ccc9bff7f84cd21d2250b39f5ffc744f971e678c21dd82e9577193ff30d1f315> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-zMm_9_hM0h0iULOfX_x0T5ceZ4wh3YLpV3GT_zDR8xU.pb
    Oct 03, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 03, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 03, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 03, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 03, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-03_11_45_03-16988647387276380313?project=apache-beam-testing
    Oct 03, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-03_11_45_03-16988647387276380313
    Oct 03, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-03_11_45_03-16988647387276380313
    Oct 03, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-03T18:45:06.612Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 03, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:12.472Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:13.365Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:13.404Z: Expanding GroupByKey operations into optimizable parts.
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:13.432Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:13.518Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:13.568Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:13.600Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:13.973Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:14.040Z: Starting 5 workers in us-central1-a...
    Oct 03, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:30.087Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 03, 2021 6:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:59.131Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 03, 2021 6:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:46:26.314Z: Workers have started successfully.
    Oct 03, 2021 6:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:46:26.339Z: Workers have started successfully.
    Oct 03, 2021 6:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:46:57.079Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 6:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:46:57.217Z: Cleaning up.
    Oct 03, 2021 6:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:46:57.283Z: Stopping worker pool...
    Oct 03, 2021 6:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:49:24.186Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 03, 2021 6:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:49:24.227Z: Worker pool stopped.
    Oct 03, 2021 6:49:30 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-03_11_45_03-16988647387276380313 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8d2be147-4c96-4c97-b094-d3156fae0888 and timestamp: 2021-10-03T18:49:30.564000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.042

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 6:49:31 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 44.139 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/k6l67cyngzccm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2500

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2500/display/redirect>

Changes:


------------------------------------------
[...truncated 339.36 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 03, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 03, 2021 12:45:03 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 03, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 03, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 03, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4433133106638732651.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Qf3UilYOzWOXyJlTGj3a4ol8g6kpJ5ZkS07Lg0j4f8o.jar
    Oct 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash f8eafff2db1b48bf88dc40919dd0ddd0941d770305d4e73ac56de9d1f42425b1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline--Or_8tsbSL-I3ECRndDd0JQddwMF1Oc6xW3p0fQkJbE.pb
    Oct 03, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 03, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 03, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 03, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 03, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-03_05_45_16-12243917676759023265?project=apache-beam-testing
    Oct 03, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-03_05_45_16-12243917676759023265
    Oct 03, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-03_05_45_16-12243917676759023265
    Oct 03, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-03T12:45:21.940Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 03, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:27.223Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:27.951Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:27.988Z: Expanding GroupByKey operations into optimizable parts.
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:28.036Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:28.096Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:28.121Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:28.158Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:28.502Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:28.582Z: Starting 5 workers in us-central1-a...
    Oct 03, 2021 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:50.717Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 03, 2021 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:46:13.091Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 03, 2021 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:46:13.122Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 03, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:46:23.442Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 03, 2021 12:46:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:46:43.951Z: Workers have started successfully.
    Oct 03, 2021 12:46:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:46:43.988Z: Workers have started successfully.
    Oct 03, 2021 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:47:15.720Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:47:15.861Z: Cleaning up.
    Oct 03, 2021 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:47:15.948Z: Stopping worker pool...
    Oct 03, 2021 12:49:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:49:36.439Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 03, 2021 12:49:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:49:36.483Z: Worker pool stopped.
    Oct 03, 2021 12:49:41 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-03_05_45_16-12243917676759023265 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4b515cf7-04e3-4a61-8f3a-0f1697163c65 and timestamp: 2021-10-03T12:49:41.704000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.184

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 12:49:42 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 44.84 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 22s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/gg7przbcx7dfs

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2499

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2499/display/redirect>

Changes:


------------------------------------------
[...truncated 340.28 KB...]
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
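
For orientation, a minimal sketch (not part of the build output) of how the JSON array passed via -DbeamTestPipelineOptions in the command above reaches the tests: Beam's TestPipeline reads that system property and turns it into PipelineOptions.

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    // TestPipeline.testingPipelineOptions() parses the JSON array in the
    // beamTestPipelineOptions system property into PipelineOptions, which is
    // why the ITs below run on Dataflow with the flags shown above.
    PipelineOptions options = TestPipeline.testingPipelineOptions();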

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 03, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 03, 2021 6:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 03, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 03, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
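
This coder failure (and the identical one from readUsingDefaultMethod below) points at the remedy named in the message itself: give the Row PCollection a schema before the pipeline is finalized. A minimal sketch of that fix, assuming a PCollection<Row> named rows and a field layout inferred from the query above (both are illustrative, not the IT's actual code):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Declaring the schema lets the SDK infer a RowCoder for the collection,
    // which is exactly what "provide a schema instead using
    // PCollection.setRowSchema" asks for.
    Schema schema = Schema.builder()
        .addStringField("author")
        .addStringField("type")
        .addStringField("title")
        .addInt64Field("score")
        .build();
    rows.setRowSchema(schema);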

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 03, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
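
The plan above shows both the projection (usedFields) and the filter being pushed into the BigQuery read itself. For comparison, a rough sketch (names and wiring assumed, not taken from the IT) of issuing the same query through Beam SQL against any schema-aware PCollection, where no push-down is available and the filter would run in a BeamCalcRel instead, as in the two failing tests above:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // 'stories' is a hypothetical PCollection<Row> whose schema carries the
    // by/type/title/score fields used by the HACKER_NEWS query.
    PCollection<Row> filtered = stories.apply(
        SqlTransform.query(
            "SELECT `by` AS author, type, title, score "
                + "FROM PCOLLECTION "
                + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
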
    Oct 03, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 03, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 03, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 03, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5527341037863145142.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ldCjmuz0bi2z703eO-kB9-TVejm2o8cJKIVQDSJHXi0.jar
    Oct 03, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.duct-tape/duct-tape/1.0.8/92edc22a9ab2f3e17c9bf700aaee377d50e8b530/duct-tape-1.0.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/duct-tape-1.0.8-Mc7xLd7JedH4bXz3CMQaF9pSPQXGhf1mQunQsq3bckA.jar
    Oct 03, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna/5.5.0/e0845217c4907822403912ad6828d8e0b256208/jna-5.5.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-5.5.0-swj66_5O1AnehBDgpjLRZLISawNfbqz_lo05CMr7TZ4.jar
    Oct 03, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.visible-assertions/visible-assertions/2.1.2/20d31a578030ec8e941888537267d3123c2ad1c1/visible-assertions-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/visible-assertions-2.1.2-RQSulosjfNzcto_1sHqmOr5JkvkHp3w9YSCqm5BBQBw.jar
    Oct 03, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna-platform/4.0.0/deb6bf66918989b50209b8c9aaf3b2561af7f011/jna-platform-4.0.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-platform-4.0.0-B21i7Yfna9yzdQ_-gKpJXuIRK9chJmOVTWVH9IJEkuk.jar
    Oct 03, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 0 seconds
    Oct 03, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 03, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash c6679677be01e2c3c4bb0d49fc728ab65ece3941587bdd46075f16e07952e5a1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-xmeWd74B4sPEuw1J_HKKtl7OOUFYe91GB18W4HlS5aE.pb
    Oct 03, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 03, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 03, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 03, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 03, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-02_23_45_09-11935331738148933063?project=apache-beam-testing
    Oct 03, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-02_23_45_09-11935331738148933063
    Oct 03, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-02_23_45_09-11935331738148933063
    Oct 03, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-03T06:45:12.715Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:18.220Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:18.870Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:18.906Z: Expanding GroupByKey operations into optimizable parts.
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:18.932Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:18.995Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:19.022Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:19.042Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:19.385Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:19.461Z: Starting 5 workers in us-central1-a...
    Oct 03, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:35.885Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 03, 2021 6:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:46:01.146Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 03, 2021 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:46:27.186Z: Workers have started successfully.
    Oct 03, 2021 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:46:27.216Z: Workers have started successfully.
    Oct 03, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:46:55.302Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:46:55.426Z: Cleaning up.
    Oct 03, 2021 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:46:55.501Z: Stopping worker pool...
    Oct 03, 2021 6:49:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:49:23.087Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 03, 2021 6:49:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:49:23.136Z: Worker pool stopped.
    Oct 03, 2021 6:49:30 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-02_23_45_09-11935331738148933063 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): aa063f72-c3a3-4f32-81b6-cc209e47ed4e and timestamp: 2021-10-03T06:49:30.085000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.325

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 6:49:30 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 38.779 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wsd6paxz33bg2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2498

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2498/display/redirect>

Changes:


------------------------------------------
[...truncated 339.30 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 03, 2021 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 03, 2021 12:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 03, 2021 12:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 03, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 03, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 03, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test924415722746679332.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test--TrSaHjGfQDLqi9c4923o6CG9tiTeH0tOQo81B3STRw.jar
    Oct 03, 2021 12:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 03, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 03, 2021 12:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash 2378d85467543a1a3b5e0d3948b43275ba3f975a1b25eef73fbda532a3bf4a17> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-I3jYVGdUOho7Xg05SLQydbo_l1obJe73P72lMqO_Shc.pb
    Oct 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 03, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-02_17_45_08-847305522168178165?project=apache-beam-testing
    Oct 03, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-02_17_45_08-847305522168178165
    Oct 03, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-02_17_45_08-847305522168178165
    Oct 03, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-03T00:45:14.054Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 03, 2021 12:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:19.984Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 03, 2021 12:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:20.834Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 03, 2021 12:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:20.876Z: Expanding GroupByKey operations into optimizable parts.
    Oct 03, 2021 12:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:20.911Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 03, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:20.984Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 03, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:21.010Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 03, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:21.041Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 03, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:21.390Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:21.446Z: Starting 5 workers in us-central1-c...
    Oct 03, 2021 12:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:34.512Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 03, 2021 12:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:46:05.764Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 03, 2021 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:46:31.377Z: Workers have started successfully.
    Oct 03, 2021 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:46:31.412Z: Workers have started successfully.
    Oct 03, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:46:59.267Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:46:59.464Z: Cleaning up.
    Oct 03, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:46:59.538Z: Stopping worker pool...
    Oct 03, 2021 12:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:49:24.875Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 03, 2021 12:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:49:24.919Z: Worker pool stopped.
    Oct 03, 2021 12:49:32 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-02_17_45_08-847305522168178165 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 76ceceec-88f9-4777-8b6c-bd8bf58d3eeb and timestamp: 2021-10-03T00:49:32.184000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.986

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 12:49:32 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 40.668 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/koe2y4us665ni

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2497

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2497/display/redirect>

Changes:


------------------------------------------
[...truncated 338.04 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 02, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 02, 2021 6:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 02, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 02, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@70049737]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
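
The IllegalStateException above is Beam's standard coder-inference failure: the RowMonitor output is a PCollection of Beam Rows, and such a PCollection cannot finish specifying until a schema (and with it a RowCoder) is attached. A minimal sketch of the fix the message itself points at, assuming the output really is a PCollection<Row>; the field names and types below are illustrative placeholders, not the table's actual schema:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFixSketch {
      // Attaching a schema lets Beam construct a RowCoder, which is exactly
      // what the coder inference in the stack trace above failed to do.
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING) // placeholder fields
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();
        return rows.setRowSchema(schema);
      }
    }

In the integration test itself the schema would presumably come from the rel node's output schema rather than being hand-built as above.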

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 02, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@138522610]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
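
For context, the BeamPushDownIOSourceRel above shows the planner handing both the projection (usedFields) and the supported predicate to the BigQuery read. A minimal sketch of the equivalent hand-written BigQueryIO read, assuming a hypothetical table spec (the test's real table is configured elsewhere):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;

    class PushDownSketch {
      static BigQueryIO.TypedRead<TableRow> read() {
        return BigQueryIO.readTableRows()
            .from("some-project:some_dataset.hacker_news") // hypothetical table spec
            .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
            // Projection push-down: only the used fields are read.
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // Filter push-down: the supported predicate becomes a row restriction.
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
      }
    }

This is why only this variant of the test avoids the full 14-column scan that the non-push-down plans (BeamCalcRel over expr#0..13) would perform.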
    Oct 02, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 02, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 02, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 02, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2270684606588946108.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-aI3GlyaS01dmmjHAWA8z2L4kT-2mYbi05Qb1Pzn6Bas.jar
    Oct 02, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 02, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 02, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 7e76b48122056d8d57589b41fd006d6f26ee28106fa96c9e64e7d6cf4024fe99> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-fna0gSIFbY1XWJtB_QBtbybuKBBvqWyeZOfWz0Ak_pk.pb
    Oct 02, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 02, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 02, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 02, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 02, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-02_11_45_03-10805159815140293046?project=apache-beam-testing
    Oct 02, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-02_11_45_03-10805159815140293046
    Oct 02, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-02_11_45_03-10805159815140293046
    Oct 02, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-02T18:45:07.210Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 02, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:12.122Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 02, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:12.765Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 02, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:12.803Z: Expanding GroupByKey operations into optimizable parts.
    Oct 02, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:12.835Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 02, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:12.904Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 02, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:12.920Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 02, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:12.959Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 02, 2021 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:13.277Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:13.348Z: Starting 5 workers in us-central1-a...
    Oct 02, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:43.222Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 02, 2021 6:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:59.456Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 02, 2021 6:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:46:24.817Z: Workers have started successfully.
    Oct 02, 2021 6:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:46:24.842Z: Workers have started successfully.
    Oct 02, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:46:51.107Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:46:51.231Z: Cleaning up.
    Oct 02, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:46:51.302Z: Stopping worker pool...
    Oct 02, 2021 6:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:49:16.327Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 02, 2021 6:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:49:16.366Z: Worker pool stopped.
    Oct 02, 2021 6:49:24 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-02_11_45_03-10805159815140293046 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 50b2eac8-5528-42f0-8769-ddffd4ba8580 and timestamp: 2021-10-02T18:49:24.414000000Z:
                     Metric:                    Value:
                   read_time                     6.322
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 6:49:24 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
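
This warning means the run's metrics were only printed above and never written to InfluxDB. Assuming Beam's usual load-test options (they are not visible in this build's command line, so the exact flags are an assumption), the publisher is configured through pipeline options along the lines of:

    > --influxDatabase=<database> --influxMeasurement=<measurement> --influxHost=http://localhost:8086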

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 37.51 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.
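
To iterate on just the failing tests instead of the whole suite, Gradle's standard --tests filter can be appended to the task invocation; a sketch, assuming a local Beam checkout with the Gradle wrapper:

    > ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --tests org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT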

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/opgofo7adrz4o

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2496

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2496/display/redirect>

Changes:


------------------------------------------
[...truncated 339.09 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 02, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 02, 2021 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 02, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 02, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 02, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 02, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 02, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 02, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 02, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test33187000213534086.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-s0Ua8XEPsFQPPuOm96uiigzcXG-39oKQ9e8AXeUrHS8.jar
    Oct 02, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.alibaba/fastjson/1.2.69/6cb063f1d527ff65bdbb9ea74888a5ffc3f92197/fastjson-1.2.69.jar to gs://temp-storage-for-perf-tests/loadtests/staging/fastjson-1.2.69-KniRdEoDeOAJmpuB3b7U0uGn6GMvb4b4_8-jg-UeScg.jar
    Oct 02, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 02, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 02, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 0dc756c4b389ce0fa11003af05bca7f297ddf24ca271270ad5807b7d2e746487> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-DcdWxLOJzg-hEAOvBbyn8pfd8kyicScK1YB7fS50ZIc.pb
    Oct 02, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 02, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 02, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 02, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 02, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-02_05_45_07-11784265171889707970?project=apache-beam-testing
    Oct 02, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-02_05_45_07-11784265171889707970
    Oct 02, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-02_05_45_07-11784265171889707970
    Oct 02, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-02T12:45:10.607Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:17.109Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:17.936Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:17.968Z: Expanding GroupByKey operations into optimizable parts.
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:18.004Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:18.081Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:18.107Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:18.144Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:18.480Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:18.550Z: Starting 5 workers in us-central1-c...
    Oct 02, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:44.525Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 02, 2021 12:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:49.478Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Oct 02, 2021 12:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:49.509Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Oct 02, 2021 12:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:59.755Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 02, 2021 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:46:23.466Z: Workers have started successfully.
    Oct 02, 2021 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:46:23.500Z: Workers have started successfully.
    Oct 02, 2021 12:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:46:52.468Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 12:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:46:52.591Z: Cleaning up.
    Oct 02, 2021 12:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:46:52.665Z: Stopping worker pool...
    Oct 02, 2021 12:49:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:49:17.724Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 02, 2021 12:49:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:49:17.772Z: Worker pool stopped.
    Oct 02, 2021 12:49:24 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-02_05_45_07-11784265171889707970 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): dedadd50-3e74-47e6-9654-b427973b8b6b and timestamp: 2021-10-02T12:49:24.131000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.222

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 12:49:24 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 34.872 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/yxcpdea4k4l5w

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2495

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2495/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12513] Schemas and Coders (#15632)


------------------------------------------
[...truncated 337.54 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 02, 2021 6:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 02, 2021 6:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 02, 2021 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 02, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 02, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 02, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 02, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 02, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 02, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3291093574128526183.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-JQLYC_B7axlOWx2YCYh9nveYzWLbXvACd1k_Cr3TckU.jar
    Oct 02, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 02, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 02, 2021 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash cdf0481c711dd159373ac3b3bc44104c72ecebf31f065a5590122e6ffd5e8ed7> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-zfBIHHEd0Vk3OsOzvEQQTHLs6_MfBlpVkBIub_1ejtc.pb
    Oct 02, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 02, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 02, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 02, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 02, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-01_23_45_03-2032330720690453059?project=apache-beam-testing
    Oct 02, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-01_23_45_03-2032330720690453059
    Oct 02, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-01_23_45_03-2032330720690453059
    Oct 02, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-02T06:45:06.938Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 02, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:13.276Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 02, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.015Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 02, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.055Z: Expanding GroupByKey operations into optimizable parts.
    Oct 02, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.081Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 02, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.144Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 02, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.170Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 02, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.203Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 02, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.506Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.572Z: Starting 5 workers in us-central1-c...
    Oct 02, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:37.970Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 02, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:49.961Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 02, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:46:24.150Z: Workers have started successfully.
    Oct 02, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:46:24.180Z: Workers have started successfully.
    Oct 02, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:46:51.961Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:46:52.111Z: Cleaning up.
    Oct 02, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:46:52.184Z: Stopping worker pool...
    Oct 02, 2021 6:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:49:12.021Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 02, 2021 6:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:49:12.123Z: Worker pool stopped.
    Oct 02, 2021 6:49:17 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-01_23_45_03-2032330720690453059 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 548c4b23-6edb-4267-9ef5-fe7adade5bf8 and timestamp: 2021-10-02T06:49:17.096000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.932

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 6:49:17 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
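
(This warning means the read_time/fields_read results above were computed but never exported: the publisher skips publishing when no target database/measurement is configured. A sketch of supplying them, assuming the InfluxDBSettings builder from Beam's test utilities; the host, database, and terminal get() call are assumptions, not this job's actual configuration:)

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSketch {
      static InfluxDBSettings settings() {
        return InfluxDBSettings.builder()
            .withHost("http://localhost:8086")           // illustrative endpoint
            .withDatabase("beam_performance")            // illustrative database
            .withMeasurement("sql_bqio_read_java_batch") // illustrative measurement
            .get();                                      // assumed terminal builder call
      }
    }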

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 30.558 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 0s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wqhy3kqscxoyc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2494

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2494/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11217] Implemented metrics filtering (#15482)


------------------------------------------
[...truncated 337.35 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
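
(The -DbeamTestPipelineOptions JSON array in that command line is how the integration test receives its Dataflow configuration: TestPipeline parses the system property into PipelineOptions. A minimal sketch of the mechanism; the wrapper class is illustrative:)

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class OptionsSketch {
      public static void main(String[] args) {
        // TestPipeline reads the "beamTestPipelineOptions" system property
        // (the JSON array of --flags above) and parses it into PipelineOptions.
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        System.out.println(options.getRunner()); // DataflowRunner, per the flags above
      }
    }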

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 02, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 02, 2021 12:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 02, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 02, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
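
(The root cause is the one the exception spells out: ParDo(RowMonitor) outputs a PCollection of Beam Row, and a Row coder cannot be inferred, so the collection needs an explicit schema. A minimal sketch of the remedy the message itself suggests; the four-field schema below mirrors the query's projection and is illustrative, not the test's actual fix:)

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFix {
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();
        // setRowSchema installs a RowCoder on the collection, which satisfies
        // the coder inference that failed above.
        return rows.setRowSchema(schema);
      }
    }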

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 02, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3974640514448782353.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-7HGRObmaSGVFS8QV2P8thwn5FmC6eCrknOuT0Bf2cyk.jar
    Oct 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 2bd99464ec2b6ff3616614c2a789ecb41b73f4d396080cab872b45206c25591d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-K9mUZOwrb_NhZhTCp4nstBtz9NOWCAyrhytFIGwlWR0.pb
    Oct 02, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 02, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 02, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 02, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 02, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-01_17_45_05-1229165175629021093?project=apache-beam-testing
    Oct 02, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-01_17_45_05-1229165175629021093
    Oct 02, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-01_17_45_05-1229165175629021093
    Oct 02, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-02T00:45:08.604Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 02, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:15.499Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.183Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.225Z: Expanding GroupByKey operations into optimizable parts.
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.259Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.334Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.357Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.381Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.745Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.816Z: Starting 5 workers in us-central1-c...
    Oct 02, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:28.234Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 02, 2021 12:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:46:00.975Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 02, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:46:26.187Z: Workers have started successfully.
    Oct 02, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:46:26.212Z: Workers have started successfully.
    Oct 02, 2021 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:46:56.483Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:46:56.671Z: Cleaning up.
    Oct 02, 2021 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:46:56.794Z: Stopping worker pool...
    Oct 02, 2021 12:49:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:49:15.664Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 02, 2021 12:49:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:49:15.711Z: Worker pool stopped.
    Oct 02, 2021 12:49:20 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-01_17_45_05-1229165175629021093 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4f713dcc-a53e-44f9-bef7-5aaf4a26ab5a and timestamp: 2021-10-02T00:49:20.830000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.426

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 12:49:21 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 32.389 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 2s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/rmy5gwhhtwwa2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2493

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2493/display/redirect?page=changes>

Changes:

[noreply] Switching to innerText and removing alerts

[noreply] [BEAM-12993] Update to Debezium 1.7.0.Final (#15636)

[noreply] Minor: Fix `Iterable[SplitResultResidual]` type errors (#15634)


------------------------------------------
[...truncated 338.04 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 01, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 01, 2021 6:45:00 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 01, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 01, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 01, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 01, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 01, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 01, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4824795570540795047.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ZSZHzKZ7N_SQD-MZgfeZ7Ndjbohxq-Ok3LUbeo0IcVo.jar
    Oct 01, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Oct 01, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 01, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash b3c4d4458a3186eb4ce590de1e748a9f4e5d5f53cc93309e911437bc92446ff9> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-s8TURYoxhutM5ZDeHnSKn05dX1PMkzCekRQ3vJJEb_k.pb
    Oct 01, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 01, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 01, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 01, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 01, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-01_11_45_13-142456295206191151?project=apache-beam-testing
    Oct 01, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-01_11_45_13-142456295206191151
    Oct 01, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-01_11_45_13-142456295206191151
    Oct 01, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-01T18:45:17.088Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 01, 2021 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:23.150Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:23.930Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:23.963Z: Expanding GroupByKey operations into optimizable parts.
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:23.992Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:24.079Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:24.106Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:24.121Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:24.452Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:24.525Z: Starting 5 workers in us-central1-a...
    Oct 01, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:29.389Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 01, 2021 6:46:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:46:06.164Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 01, 2021 6:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:46:34.645Z: Workers have started successfully.
    Oct 01, 2021 6:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:46:34.687Z: Workers have started successfully.
    Oct 01, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:47:01.442Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:47:01.597Z: Cleaning up.
    Oct 01, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:47:01.670Z: Stopping worker pool...
    Oct 01, 2021 6:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:49:23.526Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 01, 2021 6:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:49:23.568Z: Worker pool stopped.
    Oct 01, 2021 6:49:29 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-01_11_45_13-142456295206191151 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 64c56dab-ce87-4057-b12d-fc9473e5b8f2 and timestamp: 2021-10-01T18:49:29.905000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.038

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 6:49:30 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 34.491 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/rvugmzkytt6qg

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2492

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2492/display/redirect>

Changes:


------------------------------------------
[...truncated 339.39 KB...]
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
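
As an aside: the -DbeamTestPipelineOptions JSON in the worker command line above is how these integration tests receive their pipeline configuration. A minimal sketch of consuming it from test code, assuming the Beam Java test utilities (TestPipeline parses that system property):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class ShowTestOptions {
      public static void main(String[] args) {
        // Parses the JSON array passed via -DbeamTestPipelineOptions
        // (falls back to defaults when the property is unset).
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        System.out.println(options.getRunner());
      }
    }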

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
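
The multiple-bindings warning above is harmless here (the jdk14 binding is the one actually used, per the last line), but if it ever needed resolving, Gradle can show which dependencies pull in each binding; a plausible invocation:

    ./gradlew :sdks:java:extensions:sql:perf-tests:dependencies --configuration testRuntimeClasspath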

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 01, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 01, 2021 12:44:57 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 01, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 01, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
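
The SQLPlan/BEAMPlan pair above shows Calcite's logical plan being lowered to Beam relational nodes without push-down (the filter stays in BeamCalcRel). As a rough illustration only (the IT itself reads through the BigQuery table provider), the same query can be run over any schema-aware PCollection<Row> via SqlTransform, where PCOLLECTION names the single input:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    static PCollection<Row> filterStories(PCollection<Row> input) {
      // 'input' is assumed to already carry a schema with by/type/title/score.
      return input.apply(
          SqlTransform.query(
              "SELECT `by` AS author, type, title, score "
                  + "FROM PCOLLECTION "
                  + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
    }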


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
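
The failure above is Beam's generic missing-coder error for Row outputs: the RowMonitor ParDo emits Row but no schema was attached, so no coder can be inferred. As the message itself suggests, the usual fix is PCollection.setRowSchema. A minimal sketch (field names from the query above; the field types are assumptions for illustration):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    static PCollection<Row> withQuerySchema(PCollection<Row> rows) {
      Schema schema =
          Schema.builder()
              .addStringField("author")
              .addStringField("type")
              .addStringField("title")
              .addInt32Field("score") // assumed type; the source column may be INT64
              .build();
      // Equivalent to rows.setCoder(RowCoder.of(schema)).
      return rows.setRowSchema(schema);
    }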

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
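
That last INFO line is the push-down taking effect: the projected fields and the filter are handed to the BigQuery Storage Read API rather than applied in the pipeline (compare the BeamPushDownIOSourceRel plan above with the BeamCalcRel plans of the two failing tests). Outside Beam SQL, the equivalent direct-read configuration on BigQueryIO would look roughly like this (the table name is a placeholder):

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    BigQueryIO.readTableRows()
        .from("some-project:some_dataset.hacker_news") // placeholder table
        .withMethod(Method.DIRECT_READ)
        .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
        .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");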
    Oct 01, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 01, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 01, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 01, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4723206100083085531.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-qEpWi-6a7u1_PK_4qWoBtbdwxjaeiG6zedVyYTv0MG0.jar
    Oct 01, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-vendor-calcite-1_26_0/0.1/5fae4e97a2d8739462bd1572e48d01228766b6ef/beam-vendor-calcite-1_26_0-0.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-calcite-1_26_0-0.1-pYZ7esxRWyhKmBqBdfrpnxvg8woyykTvGbaCvLtyRyA.jar
    Oct 01, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 1 seconds
    Oct 01, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 01, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 33ad4cc501abdb54fba5f37c03d455d33c6347eb4a1246cc2efe46128fcb61af> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-M61MxQGr21T7pfN8A9RV0zxjR-tKEkbMLv5GEo_LYa8.pb
    Oct 01, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 01, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 01, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 01, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 01, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-01_05_45_11-5449097116956767890?project=apache-beam-testing
    Oct 01, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-01_05_45_11-5449097116956767890
    Oct 01, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-01_05_45_11-5449097116956767890
    Oct 01, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-01T12:45:16.076Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:24.067Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:24.891Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:24.937Z: Expanding GroupByKey operations into optimizable parts.
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:24.972Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:25.073Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:25.099Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:25.140Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:25.644Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:25.756Z: Starting 5 workers in us-central1-c...
    Oct 01, 2021 12:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:48.950Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 01, 2021 12:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:46:10.070Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 01, 2021 12:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:46:34.013Z: Workers have started successfully.
    Oct 01, 2021 12:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:46:34.044Z: Workers have started successfully.
    Oct 01, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:47:03.394Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:47:03.554Z: Cleaning up.
    Oct 01, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:47:03.634Z: Stopping worker pool...
    Oct 01, 2021 12:49:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:49:20.174Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 01, 2021 12:49:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:49:20.237Z: Worker pool stopped.
    Oct 01, 2021 12:49:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-01_05_45_11-5449097116956767890 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d0ab1099-69b9-40ed-9191-e65610971fe5 and timestamp: 2021-10-01T12:49:27.722000000Z:
                     Metric:                    Value:
                   read_time                     9.199
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 12:49:28 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 34.786 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 8s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/5jh474kjyd6vq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2491

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2491/display/redirect>

Changes:


------------------------------------------
[...truncated 337.78 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 01, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 01, 2021 6:44:58 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 01, 2021 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 01, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 01, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 01, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 01, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3299828175081092784.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-7FktRZ62Gz2eHphKeQzeql6S6amjKEg1_JupBewy7bU.jar
    Oct 01, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Oct 01, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 01, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 44c58d35db3a08d70e83108a6ce29ae09f8ab7d0e8180073f4eb362973880439> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-RMWNNds6CNcOgxCKbOKa4J-Kt9DoGABz9Os2KXOIBDk.pb
    Oct 01, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 01, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 01, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 01, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 01, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-30_23_45_17-15965796272908244087?project=apache-beam-testing
    Oct 01, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-30_23_45_17-15965796272908244087
    Oct 01, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-30_23_45_17-15965796272908244087
    Oct 01, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-01T06:45:20.619Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 01, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:27.388Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 01, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:28.080Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 01, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:28.121Z: Expanding GroupByKey operations into optimizable parts.
    Oct 01, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:28.157Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 01, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:28.232Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 01, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:28.266Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 01, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:28.299Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 01, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:28.979Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:29.066Z: Starting 5 workers in us-central1-c...
    Oct 01, 2021 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:56.913Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 01, 2021 6:46:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:46:12.087Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 01, 2021 6:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:46:39.907Z: Workers have started successfully.
    Oct 01, 2021 6:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:46:39.944Z: Workers have started successfully.
    Oct 01, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:47:11.239Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:47:11.390Z: Cleaning up.
    Oct 01, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:47:11.481Z: Stopping worker pool...
    Oct 01, 2021 6:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:49:38.376Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 01, 2021 6:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:49:38.417Z: Worker pool stopped.
    Oct 01, 2021 6:49:46 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-30_23_45_17-15965796272908244087 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f683088c-36be-4160-b47e-ac6bcabbe3bc and timestamp: 2021-10-01T06:49:46.231000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.766

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 6:49:46 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 53.614 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 26s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/kbjuzj36jtjba

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2490

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2490/display/redirect?page=changes>

Changes:

[dpcollins] [BEAM-12908] Change to use PubsubSignal for information propagation so

[Robert Bradshaw] Preserve more types in transform replacement.

[Robert Bradshaw] Update test to reflect preserved type hint.

[noreply] [BEAM-11985] Python Bigtable - Implement IO Request Count metrics

[noreply] [BEAM-9918] Make TryCrossLanguage match non Try API (#15633)

[noreply] [BEAM-12957] Add support for pyarrow 5.x (#15588)


------------------------------------------
[...truncated 341.46 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 01, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 01, 2021 12:45:10 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 01, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 01, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
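
Editor's note: the filter and column push-down logged above is available because the
table was registered with the Storage API read method. A hedged sketch of declaring
such a table through Beam SQL DDL and querying it with SqlTransform; the LOCATION,
project/dataset names, and column subset are assumptions, while TYPE bigquery and
the "method": "DIRECT_READ" table property follow Beam's BigQuery table provider:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Illustrative DDL; DIRECT_READ selects the BigQuery Storage API path,
        // which is what makes projection/filter push-down possible.
        String ddl =
            "CREATE EXTERNAL TABLE HACKER_NEWS (\n"
                + "  `by` VARCHAR, type VARCHAR, title VARCHAR, score BIGINT)\n"
                + "TYPE bigquery\n"
                + "LOCATION 'some-project:some_dataset.hacker_news'\n"
                + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";
        PCollection<Row> filtered = p.apply(
            SqlTransform.query(
                    "SELECT `by` AS author, type, title, score FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                .withDdlString(ddl));
        p.run().waitUntilFinish();
      }
    }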
    Oct 01, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 01, 2021 12:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 01, 2021 12:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 01, 2021 12:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-qOxdI4KUjo3GTP9rF7KeqZGuhs9XeyH-VdOKR6HUq6Q.jar
    Oct 01, 2021 12:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1429108089442257848.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-w7MG2wlG_XkuF6bjtEgQNkbrEnEKiP3pFRtFqP2J6pE.jar
    Oct 01, 2021 12:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 01, 2021 12:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 01, 2021 12:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash 0184a43e3e51cd74e76d0438af8bb9c64cb96b68908e74269f492224778065c7> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-AYSkPj5RzXTnbQQ4r4u5xky5a2iQjnQmn0kiJHeAZcc.pb
    Oct 01, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 01, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 01, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 01, 2021 12:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 01, 2021 12:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-30_17_45_23-9121831774679467180?project=apache-beam-testing
    Oct 01, 2021 12:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-30_17_45_23-9121831774679467180
    Oct 01, 2021 12:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-30_17_45_23-9121831774679467180
    Oct 01, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-01T00:45:27.377Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 01, 2021 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:33.914Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 01, 2021 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:34.799Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 01, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:34.840Z: Expanding GroupByKey operations into optimizable parts.
    Oct 01, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:34.877Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 01, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:34.975Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 01, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:35.013Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 01, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:35.032Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 01, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:35.349Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:35.429Z: Starting 5 workers in us-central1-a...
    Oct 01, 2021 12:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:57.313Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 01, 2021 12:46:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:46:16.336Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 01, 2021 12:46:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:46:16.362Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 01, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:46:26.720Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 01, 2021 12:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:46:49.531Z: Workers have started successfully.
    Oct 01, 2021 12:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:46:49.548Z: Workers have started successfully.
    Oct 01, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:47:17.957Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:47:18.091Z: Cleaning up.
    Oct 01, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:47:18.173Z: Stopping worker pool...
    Oct 01, 2021 12:49:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:49:52.733Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 01, 2021 12:49:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:49:52.763Z: Worker pool stopped.
    Oct 01, 2021 12:49:58 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-30_17_45_23-9121831774679467180 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 47d31f15-03a8-4eb1-bea7-de151ad3e716 and timestamp: 2021-10-01T00:49:59.046000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.271

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 12:49:59 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
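
Editor's note: the warning above is harmless for the pipeline run itself; it only
means this run's read_time/fields_read metrics were not exported. Publishing requires
the InfluxDB settings to be passed to the test; in Beam's perf-test jobs these are
normally supplied as pipeline options along the lines of
--influxMeasurement=sql_bqio_read_java_batch, --influxDatabase=beam_test_metrics and
--influxHost=<host>. The exact option names and values here are assumptions drawn
from comparable Beam jobs, not from this log.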

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 53.151 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.
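
Editor's note: to reproduce with full diagnostics, the failing task can be re-run
from the source root with the standard Gradle flags suggested above, e.g.:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --stacktrace --info

The task path is taken from the failure message; any required
-DbeamTestPipelineOptions would still have to be supplied, as in the Gradle test
executor command recorded elsewhere in these logs.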

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 40s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/ltxf7omg5btf6

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2489

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2489/display/redirect?page=changes>

Changes:

[stranniknm] [BEAM-12951]: Create initial structure for Playground application

[rohde.samuel] Fix BEAM-12984

[noreply] Update Beam glossary (#15619)


------------------------------------------
[...truncated 344.31 KB...]
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 30, 2021 6:46:59 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 30, 2021 6:47:01 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 30, 2021 6:47:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:47:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:47:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 6:47:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:47:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:47:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 6:47:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@415755770]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
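
Editor's note: this is the same root cause as in the earlier failure, hit here by
the DIRECT_READ variant. Besides PCollection.setRowSchema (sketched earlier), the
exception's first suggestion, setting a Coder explicitly, also resolves it; a
minimal sketch assuming a hypothetical two-field schema, with RowCoder.of supplying
the coder that inference cannot:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.Row;

    public class RowCoderFix {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        Schema schema = Schema.builder()
            .addStringField("type")
            .addInt64Field("score")
            .build();
        Row row = Row.withSchema(schema).addValues("story", 3L).build();
        // Setting the coder up front sidesteps Row coder inference entirely.
        p.apply(Create.of(row).withCoder(RowCoder.of(schema)));
        p.run().waitUntilFinish();
      }
    }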

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 30, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@991913540]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 6:47:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 30, 2021 6:47:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 30, 2021 6:47:20 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 30, 2021 6:47:29 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 30, 2021 6:47:29 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Sep 30, 2021 6:47:31 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7139866847520102503.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-sUITyF-jIcOE0DPCp5g8ar_3Wzik35VTFkPaHmK9Nik.jar
    Sep 30, 2021 6:47:33 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/kafka/1.15.1/ccd374ca7e5d8b06fb31ecffeb03abc42feada84/kafka-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/kafka-1.15.1-Z4qZ3jdWXDvdB7xD9VivkdVQiRZWNWUhPSIANVCSZDA.jar
    Sep 30, 2021 6:47:33 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/testcontainers/1.15.1/91e6dfab8f141f77c6a0dd147a94bd186993a22c/testcontainers-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/testcontainers-1.15.1-_fMrLpRXSowMnAISRc73bMk5UqrXNnsv6vkPXOZRXu0.jar
    Sep 30, 2021 6:47:33 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.mongo/2.2.0/781d14f4e3d9eeb0b4c3e00a4ec165a04b3dc5c4/de.flapdoodle.embed.mongo-2.2.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.mongo-2.2.0-vNy3lJC0jW9u4Cy1AHsqSbjRUqOTX9ycpEmHkht7vvk.jar
    Sep 30, 2021 6:47:33 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-api/3.2.7/81408fc988c229ea11354fee9902c47842343f04/docker-java-api-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-api-3.2.7-AwtXDacySf3gUHoixVjjNqW0wUI0YcGJjMO-N7kxBho.jar
    Sep 30, 2021 6:47:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-csv/1.8/37ca9a9aa2d4be2599e55506a6d3170dd7a3df4/commons-csv-1.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-csv-1.8-qL1WZS7UZo2dWjOZSuUvWbnjnI6w68tmhOaK7udXmmE.jar
    Sep 30, 2021 6:47:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/commons-compiler/3.0.11/f2a6ec7fbc929c9fc87ff8bb486c0574951c5b09/commons-compiler-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-compiler-3.0.11-DxpPXyZccBoxkzJErnBF_O8YtPpZUEF-Je5wvlDd2s8.jar
    Sep 30, 2021 6:47:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/janino/3.0.11/e699e368095379ba0402ea1780a87fcaea16e68f/janino-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/janino-3.0.11-kje3HSMpGA5ZIQ6aqhAO4xNFTvCuWIYIx1yxkxlZG-E.jar
    Sep 30, 2021 6:47:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.process/2.1.2/986b38302fa10018d5baccee7f655c31ee9afe5b/de.flapdoodle.embed.process-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.process-2.1.2-OasY7D5KRAimcZcWcjFwgi8Qb4B-iff1FfrVeNSih6A.jar
    Sep 30, 2021 6:47:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport-zerodep/3.2.7/d74cb05fabc57e9ec441dc39d0bc58ad649fad3d/docker-java-transport-zerodep-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-zerodep-3.2.7-O2a9rUb-899yDZMu4E8j9ItFOkNoggqbLP2xaLmOwJY.jar
    Sep 30, 2021 6:47:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport/3.2.7/315903a129f530422747efc163dd255f0fa2555e/docker-java-transport-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-3.2.7-ynNXlSXhDSyiLCXkA4c9E9_HGeKB7-wbpEzmswFrI-A.jar
    Sep 30, 2021 6:47:35 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 239 files cached, 11 files newly uploaded in 6 seconds
    Sep 30, 2021 6:47:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 30, 2021 6:47:36 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash c9aef7afd66ec0f6fc4129cd0a16db636b491f19f7c80ca12c3146d9d700016e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ya73r9ZuwPb8QSnNChbbY2tJHxn3yAyhLDFG2dcAAW4.pb
    Sep 30, 2021 6:47:42 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 30, 2021 6:47:43 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 30, 2021 6:47:43 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 30, 2021 6:47:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 30, 2021 6:47:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-30_11_47_44-14843811000132432781?project=apache-beam-testing
    Sep 30, 2021 6:47:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-30_11_47_44-14843811000132432781
    Sep 30, 2021 6:47:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-30_11_47_44-14843811000132432781
    Sep 30, 2021 6:47:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-30T18:47:50.444Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 30, 2021 6:48:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:47:59.801Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 30, 2021 6:48:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:00.631Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 30, 2021 6:48:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:00.678Z: Expanding GroupByKey operations into optimizable parts.
    Sep 30, 2021 6:48:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:00.711Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 30, 2021 6:48:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:00.775Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 30, 2021 6:48:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:00.823Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 30, 2021 6:48:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:00.856Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 30, 2021 6:48:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:01.271Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 6:48:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:01.351Z: Starting 5 workers in us-central1-c...
    Sep 30, 2021 6:48:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:28.732Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 30, 2021 6:48:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:37.174Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Sep 30, 2021 6:48:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:37.205Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Sep 30, 2021 6:48:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:47.443Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 30, 2021 6:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:49:11.591Z: Workers have started successfully.
    Sep 30, 2021 6:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:49:11.619Z: Workers have started successfully.
    Sep 30, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:49:41.139Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:49:41.266Z: Cleaning up.
    Sep 30, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:49:41.345Z: Stopping worker pool...
    Sep 30, 2021 6:52:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:52:03.674Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 30, 2021 6:52:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:52:03.851Z: Worker pool stopped.
    Sep 30, 2021 6:52:12 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-30_11_47_44-14843811000132432781 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e7dd9959-3546-470f-b447-57a5bb72fad4 and timestamp: 2021-09-30T18:52:12.078000000Z:
                     Metric:                    Value:
                   read_time                      9.98
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 6:52:12 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.058 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.075 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 5 mins 30.206 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 19s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/mny4r4enhidhy

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2488

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2488/display/redirect>

Changes:


------------------------------------------
[...truncated 339.34 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 42de1e1f803ec917230cdec58ad48498
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 30, 2021 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 30, 2021 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 30, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 30, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 30, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 30, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 30, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Sep 30, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test888665778287048086.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-CFgx9LkNDXWIzF4L_Q8WWHAUnGvAvMK_daSGkpVuM8k.jar
    Sep 30, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 30, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 30, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash b4bf1acc6157a06cd79cecbba5e2af6c3f56cc71eb97ad21cbe6128e77c69dfb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-tL8azGFXoGzXnOy7peKvbD9WzHHrl60hy-YSjnfGnfs.pb
    Sep 30, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 30, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 30, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 30, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 30, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-30_05_45_04-8137050605073411640?project=apache-beam-testing
    Sep 30, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-30_05_45_04-8137050605073411640
    Sep 30, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-30_05_45_04-8137050605073411640
    Sep 30, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-30T12:45:07.611Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:13.604Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:14.437Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:14.478Z: Expanding GroupByKey operations into optimizable parts.
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:14.509Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:14.587Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:14.613Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:14.646Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:14.974Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:15.050Z: Starting 5 workers in us-central1-a...
    Sep 30, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:19.793Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 30, 2021 12:46:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:46:01.997Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 30, 2021 12:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:46:30.848Z: Workers have started successfully.
    Sep 30, 2021 12:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:46:30.871Z: Workers have started successfully.
    Sep 30, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:47:02.871Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:47:02.997Z: Cleaning up.
    Sep 30, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:47:03.090Z: Stopping worker pool...
    Sep 30, 2021 12:49:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:49:35.992Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 30, 2021 12:49:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:49:36.074Z: Worker pool stopped.
    Sep 30, 2021 12:49:43 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-30_05_45_04-8137050605073411640 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 972d64f7-d3b2-4841-8f57-1bb2826145c6 and timestamp: 2021-09-30T12:49:43.636000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.914

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 12:49:44 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
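
This warning affects only metrics export, not the build result: the run was not configured with an InfluxDB measurement/database, so InfluxDBPublisher skips publishing. Assuming the option names used elsewhere in Beam's load-test configurations (an assumption, not taken from this log), publishing would be enabled by extending the -DbeamTestPipelineOptions list shown in the Gradle "Starting process" line with something like:

    --influxDatabase=beam_test_metrics
    --influxMeasurement=sql_bqio_read_java_batch
    --influxHost=http://localhost:8086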

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 56.406 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 26s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/eaexy6eq2bbxy

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2487

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2487/display/redirect>

Changes:


------------------------------------------
[...truncated 337.79 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 42de1e1f803ec917230cdec58ad48498
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
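
The -DbeamTestPipelineOptions JSON array above is how these integration tests receive their pipeline configuration. A minimal sketch of the standard JUnit pattern that consumes it (TestPipeline parses that system property into PipelineOptions); the test body is elided:

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;
    import org.junit.Rule;
    import org.junit.Test;

    public class ExampleIT {
      // TestPipeline.create() reads the beamTestPipelineOptions system
      // property and turns the JSON array into PipelineOptions.
      @Rule public final transient TestPipeline pipeline = TestPipeline.create();

      @Test
      public void runsOnTheConfiguredRunner() {
        PipelineOptions options = pipeline.getOptions();
        // ... apply the transforms under test to `pipeline` here ...
        pipeline.run().waitUntilFinish();
      }
    }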

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 30, 2021 6:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 30, 2021 6:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 30, 2021 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
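
The readUsingDirectReadMethod failure above (and readUsingDefaultMethod below) is the schema-inference problem the exception describes: the Row output of ParDo(RowMonitor) carries no schema, so no coder can be inferred. A minimal, self-contained sketch of the remedy the message itself suggests, attaching a schema so a Row coder exists; the class name, schema, and values here are illustrative, not the test's actual code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFix {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Schema for the four fields the test query selects.
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        Row row = Row.withSchema(schema)
            .addValues("someone", "story", "An example story", 3L)
            .build();

        // Attaching the schema is what makes a Row coder inferable; without
        // it, PCollection.finishSpecifying throws the IllegalStateException
        // seen above. Create needs the schema up front; for a Row-typed
        // ParDo output the equivalent fix is:
        //   output.setRowSchema(schema);           // preferred
        //   output.setCoder(RowCoder.of(schema));  // same effect
        PCollection<Row> rows = p.apply(Create.of(row).withRowSchema(schema));

        p.run().waitUntilFinish();
      }
    }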

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
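
For context, DIRECT_READ reads the table through the BigQuery Storage Read API, which is what allows the projection (usedFields) and the filter above to be pushed into the source. A hedged sketch of the equivalent read written directly against BigQueryIO; the table name is illustrative:

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // DIRECT_READ (BigQuery Storage Read API) with the projection and
        // predicate pushed into the source, mirroring usedFields and
        // BigQueryFilter in the BEAMPlan above.
        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // illustrative
                .withMethod(TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
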
    Sep 30, 2021 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 30, 2021 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 30, 2021 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Sep 30, 2021 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2790166041164843000.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-mTB5mjFKQujtR6LcWttiPQ9brPwnAPx19bogbCgUijI.jar
    Sep 30, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 30, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 30, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104893 bytes, hash 8b044eaaacaa7e7f8aac2783740cb80a1c2e999f4b8ff427738710ecfb472750> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-iwROqqyqfn-KrCeDdAy4ChwumZ9Lj_Qnc4cQ7PtHJ1A.pb
    Sep 30, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 30, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 30, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 30, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 30, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-29_23_45_04-480439106660977662?project=apache-beam-testing
    Sep 30, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-29_23_45_04-480439106660977662
    Sep 30, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-29_23_45_04-480439106660977662
    Sep 30, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-30T06:45:08.114Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:13.883Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:14.778Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:14.820Z: Expanding GroupByKey operations into optimizable parts.
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:14.844Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:14.893Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:14.911Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:14.932Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:15.268Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:15.337Z: Starting 5 workers in us-central1-a...
    Sep 30, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:36.225Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 30, 2021 6:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:46:00.035Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 30, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:46:26.000Z: Workers have started successfully.
    Sep 30, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:46:26.033Z: Workers have started successfully.
    Sep 30, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:46:53.177Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:46:53.319Z: Cleaning up.
    Sep 30, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:46:53.400Z: Stopping worker pool...
    Sep 30, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:49:23.738Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 30, 2021 6:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:49:23.783Z: Worker pool stopped.
    Sep 30, 2021 6:49:30 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-29_23_45_04-480439106660977662 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cacf32ed-da08-4bcc-a773-ddf9035f0ba3 and timestamp: 2021-09-30T06:49:30.851000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.082

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 6:49:31 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 42.826 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/xelkpgsh6jara

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2486

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2486/display/redirect?page=changes>

Changes:

[kileysok] Update windmill state map to resolve *IfAbsent methods immediately

[piotr.szczepanik] [BEAM-12356] Fixed last non-cached usage of DatasetService in BigQuery

[noreply] [BEAM-12977] Translates Reshuffle in Portable Mode with Samza native

[noreply] [BEAM-12982] Help users debug which JvmInitializer is running and when.


------------------------------------------
[...truncated 344.50 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 42de1e1f803ec917230cdec58ad48498
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 30, 2021 12:46:03 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 30, 2021 12:46:04 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 30, 2021 12:46:05 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 30, 2021 12:46:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:46:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:46:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 12:46:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:46:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1413020570]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@210017548]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 30, 2021 12:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 30, 2021 12:46:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 30, 2021 12:46:14 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Sep 30, 2021 12:46:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test573253334719175251.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-lVVrtBnXYSgG1a2yKE-olbo9sFY3aO8mu-UE6wX2Qas.jar
    Sep 30, 2021 12:46:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 30, 2021 12:46:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 30, 2021 12:46:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 2d97c0ed6cfde58fd8348c7ced6693d03bd8fa5d1dde615f54f569d976923356> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-LZfA7Wz95Y_YNIx87WaT0DvY-l0d3mFfVPVp2XaSM1Y.pb
    Sep 30, 2021 12:46:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 30, 2021 12:46:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 30, 2021 12:46:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 30, 2021 12:46:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 30, 2021 12:46:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-29_17_46_17-1488063317250134391?project=apache-beam-testing
    Sep 30, 2021 12:46:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-29_17_46_17-1488063317250134391
    Sep 30, 2021 12:46:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-29_17_46_17-1488063317250134391
    Sep 30, 2021 12:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-30T00:46:22.320Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:27.775Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:28.466Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:28.515Z: Expanding GroupByKey operations into optimizable parts.
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:28.560Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:28.637Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:28.666Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:28.712Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:29.230Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:29.305Z: Starting 5 workers in us-central1-a...
    Sep 30, 2021 12:46:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:42.628Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 30, 2021 12:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:47:20.081Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 30, 2021 12:47:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:47:45.242Z: Workers have started successfully.
    Sep 30, 2021 12:47:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:47:45.269Z: Workers have started successfully.
    Sep 30, 2021 12:48:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:48:16.697Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 12:48:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:48:16.897Z: Cleaning up.
    Sep 30, 2021 12:48:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:48:16.985Z: Stopping worker pool...
    Sep 30, 2021 12:50:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:50:36.616Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 30, 2021 12:50:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:50:36.663Z: Worker pool stopped.
    Sep 30, 2021 12:50:41 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-29_17_46_17-1488063317250134391 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c877a94b-b4d5-42f1-aa89-e2bff151cd7b and timestamp: 2021-09-30T00:50:41.950000000Z:
                     Metric:                    Value:
                   read_time                     9.085
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 12:50:42 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 42.08 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 24s
152 actionable tasks: 102 executed, 50 from cache

Publishing build scan...
https://gradle.com/s/3fjnuyutmrfsg

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2485

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2485/display/redirect?page=changes>

Changes:

[samuelw] [BEAM-12942] Validate pubsub messages before they are published in

[clairem] [BEAM-12628] make useReflectApi default to true

[noreply] [BEAM-12856] Change hard-coded limits for reading from a UnboundedReader


------------------------------------------
[...truncated 359.71 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 572a460eb19ebb7d6397f0db1a677cfc
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 11'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 11'
Successfully started process 'Gradle Test Executor 11'

Gradle Test Executor 11 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 29, 2021 6:58:14 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 29, 2021 6:58:16 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 29, 2021 6:58:17 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 29, 2021 6:58:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:58:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:58:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 6:58:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:58:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:58:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 6:58:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@769449421]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
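
This IllegalStateException is Beam's standard coder-resolution failure for a PCollection<Row>: Row elements have no inferable coder until a schema is attached, exactly as the second listed root cause says. A minimal, self-contained sketch of the remedy the message itself names (the schema, input, and DoFn below are illustrative placeholders, not the test's RowMonitor transform):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFix {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Hypothetical schema mirroring the four projected columns.
        final Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

        PCollection<Row> rows =
            p.apply(Create.of("dang,story,Example title,42"))
                .apply(
                    ParDo.of(
                        new DoFn<String, Row>() {
                          @ProcessElement
                          public void process(@Element String line, OutputReceiver<Row> out) {
                            String[] f = line.split(",");
                            out.output(
                                Row.withSchema(schema)
                                    .addValues(f[0], f[1], f[2], Long.parseLong(f[3]))
                                    .build());
                          }
                        }))
                // Without this call, p.run() fails with the same "Unable to
                // return a default Coder ... Please provide a schema instead
                // using PCollection.setRowSchema" error reported above.
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }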

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 29, 2021 6:59:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 6:59:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:59:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 6:59:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 6:59:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:59:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 6:59:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1710747748]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
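
Both read paths fail identically, and before any BigQuery work happens: the exception comes out of the shared BeamSqlRelUtils.toPCollection plumbing, so the choice of DEFAULT versus DIRECT_READ never actually comes into play here. For reference, the method the BigQueryTable constructor logs is chosen per table when the table is registered. A hedged sketch using the public SqlTransform API rather than the IT's own setup (project, dataset, and table names are placeholders; SqlTransform.withDdlString is assumed available in this SDK line):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class ReadMethodExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // The "method" table property is what BigQueryTable reports as
        // "BigQuery method is set to: ..." (DEFAULT, DIRECT_READ, or EXPORT).
        String ddl =
            "CREATE EXTERNAL TABLE HACKER_NEWS (\n"
                + "  title VARCHAR, score BIGINT, `by` VARCHAR, type VARCHAR\n"
                + ")\n"
                + "TYPE bigquery\n"
                + "LOCATION 'my-project:my_dataset.hacker_news'\n"
                + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";

        PCollection<Row> result =
            p.apply(
                SqlTransform.query(
                        "SELECT `by` AS author, type, title, score FROM HACKER_NEWS "
                            + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                    .withDdlString(ddl));

        p.run().waitUntilFinish();
      }
    }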

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 6:59:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:59:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:59:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 6:59:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:59:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:59:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 6:59:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 29, 2021 6:59:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 29, 2021 6:59:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 29, 2021 6:59:47 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 29, 2021 6:59:47 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-JPP4cyCHQxn010diAGAefPjsl8-1s3Nta1xm7CweWOs.jar
    Sep 29, 2021 6:59:47 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2195400849734178326.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-E1qygH0ujm0NqN53mzUn7Qw1X8k6RhcpwSlfU35zayc.jar
    Sep 29, 2021 6:59:48 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 1 second
    Sep 29, 2021 6:59:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 29, 2021 6:59:48 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash a7f8094b559bc23ce8d5449d7564caa2e829b036fd58bdecd261207045f02f70> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-p_gJS1Wbwjzo1USddWTKougpsDb9WL3s0mEgcEXwL3A.pb
    Sep 29, 2021 6:59:51 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 29, 2021 6:59:51 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 29, 2021 6:59:51 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 29, 2021 6:59:51 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 29, 2021 6:59:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-29_11_59_51-1748228159735481124?project=apache-beam-testing
    Sep 29, 2021 6:59:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-29_11_59_51-1748228159735481124
    Sep 29, 2021 6:59:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-29_11_59_51-1748228159735481124
    Sep 29, 2021 6:59:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-29T18:59:55.000Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 29, 2021 7:01:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:28.347Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 29, 2021 7:01:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:31.107Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 29, 2021 7:01:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:31.933Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 29, 2021 7:01:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:32.070Z: Expanding GroupByKey operations into optimizable parts.
    Sep 29, 2021 7:01:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:32.134Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 29, 2021 7:01:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:32.275Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 29, 2021 7:01:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:32.333Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 29, 2021 7:01:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:32.400Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 29, 2021 7:01:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:33.863Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 7:01:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:34.065Z: Starting 5 workers in us-central1-c...
    Sep 29, 2021 7:02:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:02:08.491Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 29, 2021 7:02:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:02:08.561Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 29, 2021 7:02:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:02:18.897Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 29, 2021 7:02:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:02:58.535Z: Workers have started successfully.
    Sep 29, 2021 7:02:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:02:58.591Z: Workers have started successfully.
    Sep 29, 2021 7:03:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:03:30.747Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 7:03:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:03:31.457Z: Cleaning up.
    Sep 29, 2021 7:03:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:03:32.659Z: Stopping worker pool...
    Sep 29, 2021 7:05:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:05:46.159Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 29, 2021 7:05:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:05:46.340Z: Worker pool stopped.
    Sep 29, 2021 7:06:45 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-29_11_59_51-1748228159735481124 finished with status DONE.
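
This third test is the one that actually exercises push-down, and it is also the only one that reaches Dataflow: the BeamPushDownIOSourceRel above carries usedFields=[by, type, title, score] plus the supported filter, so the Storage API read returns only four pre-filtered columns, and the job finishes with status DONE. At the BigQueryIO level the pushed-down read corresponds roughly to the following sketch (table name is a placeholder; withSelectedFields and withRowRestriction are the Storage Read API knobs the projection and filter map onto):

    import java.util.Arrays;

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownEquivalent {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("my-project:my_dataset.hacker_news")
                    .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                    // Column projection: only the fields the query touches.
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    // Server-side filter, matching the one buildIOReader logged.
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }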


Gradle Test Executor 11 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 61fc3acd-7f0a-4563-a089-8fbdfdf37816 and timestamp: 2021-09-29T19:06:45.080000000Z:
                     Metric:                    Value:
                   read_time                     7.666
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 7:06:45 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
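
This warning only affects the optional InfluxDB sink: publishWithCheck skips publishing when the measurement/database settings are absent, while the BigQuery-backed results above (written via the metricsBigQueryDataset/metricsBigQueryTable pipeline options) are unaffected. Presumably the job-side fix is to pass the InfluxDB options Beam's other perf jobs use, along the lines of --influxDatabase=... and --influxMeasurement=... (option names assumed from Beam's test utilities, not confirmed by this log).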

3 tests completed, 2 failed
Finished generating test XML results (0.011 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 8 mins 47.008 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 21m 23s
152 actionable tasks: 109 executed, 43 from cache

Publishing build scan...
https://gradle.com/s/7d7qoff3k77au

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2484

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2484/display/redirect>

Changes:


------------------------------------------
[...truncated 337.20 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8fac3e85e47cb465aca8eadd097ae6c8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
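
In other words, the cache key matched but the task re-ran anyway: Beam's integration-test tasks are deliberately never up-to-date (an outputs.upToDateWhen { false } style setting, presumably attached by the custom actions mentioned above), so every build re-executes the suite against live services instead of reusing cached results.
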
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 29, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 29, 2021 12:44:59 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 29, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 29, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 29, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 29, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 29, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-mkwBNRU4-IrDjMZ0CMdMym0ZilYRrhqnvMLb7Axcm90.jar
    Sep 29, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test657470287928516511.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-HWEZBcNCPETt_MjilxZjiWsuusvE0FuEl60l020V96U.jar
    Sep 29, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 29, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 29, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash 7bd744d3e6cebdec48e398023fe98c13ec19c56050b1522239d987dbffb1a4b9> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-e9dE0-bOvexI45gCP-mME-wZxWBQsVIiOdmH2_-xpLk.pb
    Sep 29, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 29, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 29, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 29, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 29, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-29_05_45_12-4445359840033513456?project=apache-beam-testing
    Sep 29, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-29_05_45_12-4445359840033513456
    Sep 29, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-29_05_45_12-4445359840033513456
    Sep 29, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-29T12:45:19.145Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:26.872Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:27.767Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:27.806Z: Expanding GroupByKey operations into optimizable parts.
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:27.833Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:27.907Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:27.927Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:27.961Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:28.299Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:28.377Z: Starting 5 workers in us-central1-a...
    Sep 29, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:33.133Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 29, 2021 12:46:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:46:15.299Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 29, 2021 12:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:46:41.320Z: Workers have started successfully.
    Sep 29, 2021 12:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:46:41.353Z: Workers have started successfully.
    Sep 29, 2021 12:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:47:10.496Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 12:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:47:10.620Z: Cleaning up.
    Sep 29, 2021 12:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:47:10.706Z: Stopping worker pool...
    Sep 29, 2021 12:50:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:50:05.668Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 29, 2021 12:50:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:50:05.733Z: Worker pool stopped.
    Sep 29, 2021 12:50:16 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-29_05_45_12-4445359840033513456 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a473a4e6-b787-4486-bdbc-c44588f3e576 and timestamp: 2021-09-29T12:50:16.855000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.501

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 12:50:17 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.189 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 22.696 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 58s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/nclbmw3pccass

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2483

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2483/display/redirect>

Changes:


------------------------------------------
[...truncated 345.98 KB...]
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8fac3e85e47cb465aca8eadd097ae6c8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 29, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 29, 2021 6:45:15 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 29, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 29, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 29, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 29, 2021 6:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 29, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 29, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-mkwBNRU4-IrDjMZ0CMdMym0ZilYRrhqnvMLb7Axcm90.jar
    Sep 29, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4767608592662490288.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-SPSuFqghUI4aME98kWQbAu6dIX6hLa538WUDFVieR24.jar
    Sep 29, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 29, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 29, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash f06cde5785855ef35d79f1cb8dd910372587f22dd2ff2260569ab8751217abbc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8GzeV4WFXvNdefHLjdkQNyWH8i3S_yJgVpq4dRIXq7w.pb
    Sep 29, 2021 6:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 29, 2021 6:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 29, 2021 6:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 29, 2021 6:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 29, 2021 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-28_23_45_28-6780201276708781993?project=apache-beam-testing
    Sep 29, 2021 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-28_23_45_28-6780201276708781993
    Sep 29, 2021 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-28_23_45_28-6780201276708781993
    Sep 29, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-29T06:45:31.977Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:40.972Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:41.663Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:41.703Z: Expanding GroupByKey operations into optimizable parts.
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:41.730Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:41.807Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:41.838Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:41.867Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:42.247Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:42.316Z: Starting 5 workers in us-central1-c...
    Sep 29, 2021 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:50.742Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 29, 2021 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:46:17.907Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 29, 2021 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:46:17.945Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 29, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:46:28.179Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 29, 2021 6:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:46:55.358Z: Workers have started successfully.
    Sep 29, 2021 6:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:46:55.407Z: Workers have started successfully.
    Sep 29, 2021 6:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:47:26.008Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 6:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:47:26.193Z: Cleaning up.
    Sep 29, 2021 6:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:47:26.288Z: Stopping worker pool...
    Sep 29, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:49:48.018Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 29, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:49:48.079Z: Worker pool stopped.
    Sep 29, 2021 6:49:54 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-28_23_45_28-6780201276708781993 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 65a611b5-7c37-478c-b9ac-7cb85651a9ac and timestamp: 2021-09-29T06:49:54.463000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.573

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 6:49:54 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
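[Editor's note] This warning is why the fields_read/read_time numbers above never reach the metrics backend: the publisher checks for a measurement and a database before writing anything. A hedged sketch of what it is looking for, assuming the builder on org.apache.beam.sdk.testutils.publishing.InfluxDBSettings (the builder method names are an assumption about the testutils API, not verified here):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // Host and database values are placeholders; the measurement mirrors the
        // BigQuery table name used by this job's pipeline options.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")           // placeholder
                .withDatabase("beam_test_metrics")           // placeholder
                .withMeasurement("sql_bqio_read_java_batch") // assumption: matches metricsBigQueryTable
                .get();
        // With these supplied, publishWithCheck would push the metrics instead of
        // logging the "Missing property" warning seen above.
      }
    }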

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 43.423 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 37s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/y4mbdkxl577uk

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2482

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2482/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11129] Add namespace and key to portable display data (#15564)

[noreply] [BEAM-12906] Add a `dataframe` extra for installing a pandas version


------------------------------------------
[...truncated 366.26 KB...]
Gradle Test Executor 7 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8fac3e85e47cb465aca8eadd097ae6c8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 7'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 7'
Successfully started process 'Gradle Test Executor 7'
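[Editor's note] The -DbeamTestPipelineOptions system property in the command line above is a JSON array of "--name=value" strings; the test harness (TestPipeline) effectively hands them to PipelineOptionsFactory, which binds each one to a typed PipelineOptions property. A minimal sketch with illustrative values only:

    import org.apache.beam.sdk.extensions.gcp.options.GcpOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class TestOptionsSketch {
      public static void main(String[] args) {
        // Two of the options from the command line above, parsed the same way.
        GcpOptions options =
            PipelineOptionsFactory.fromArgs(
                    "--project=apache-beam-testing",
                    "--tempLocation=gs://temp-storage-for-perf-tests/loadtests")
                .as(GcpOptions.class);
        System.out.println(options.getProject());      // apache-beam-testing
        System.out.println(options.getTempLocation()); // gs://temp-storage-for-perf-tests/loadtests
      }
    }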

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 29, 2021 12:49:35 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 29, 2021 12:49:36 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 29, 2021 12:49:37 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 29, 2021 12:49:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:49:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:49:41 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 12:49:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:49:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:49:41 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 12:49:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
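[Editor's note] Both coder failures in this run have the same shape: the ParDo(RowMonitor) output is a PCollection<Row>, and Beam cannot infer a coder for Row without a schema. A minimal, self-contained sketch of the fix the exception itself suggests; the names and the four-field schema are illustrative, not the test's actual code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();
        Row row = Row.withSchema(schema)
            .addValues("alice", "story", "hello world", 3L)
            .build();
        PCollection<Row> monitored =
            p.apply(Create.of(row).withRowSchema(schema))
                .apply("RowMonitorLike", ParDo.of(new DoFn<Row, Row>() {
                  @ProcessElement
                  public void process(@Element Row r, OutputReceiver<Row> out) {
                    out.output(r); // pass-through, like the test's RowMonitor
                  }
                }))
                // Without this line Beam cannot infer a coder for Row and fails with
                // the IllegalStateException above. Equivalently, pin the coder with
                // .setCoder(RowCoder.of(schema)).
                .setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }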

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 29, 2021 12:49:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 12:49:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:49:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 12:49:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 29, 2021 12:49:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 29, 2021 12:49:44 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 29, 2021 12:49:48 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 29, 2021 12:49:48 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-mkwBNRU4-IrDjMZ0CMdMym0ZilYRrhqnvMLb7Axcm90.jar
    Sep 29, 2021 12:49:48 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7083351408459195147.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-aJjNwohAk2chZZrWcDhxFA3fmi0rcx5ZcQijQ58Wfsw.jar
    Sep 29, 2021 12:49:49 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 29, 2021 12:49:49 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 29, 2021 12:49:49 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash c4c25d34dc802fe4e8a921f61e04eea39bf8537c0aa431f9465055ae43f49e96> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-xMJdNNyAL-ToqSH2HgTuo5v4U3wKpDH5RlBVrkP0npY.pb
    Sep 29, 2021 12:49:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 29, 2021 12:49:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 29, 2021 12:49:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 29, 2021 12:49:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 29, 2021 12:49:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-28_17_49_52-11142279470977090787?project=apache-beam-testing
    Sep 29, 2021 12:49:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-28_17_49_52-11142279470977090787
    Sep 29, 2021 12:49:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-28_17_49_52-11142279470977090787
    Sep 29, 2021 12:49:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-29T00:49:55.374Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 29, 2021 12:50:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:01.028Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:01.916Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:01.966Z: Expanding GroupByKey operations into optimizable parts.
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:02.016Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:02.115Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:02.165Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:02.202Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:02.641Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:02.764Z: Starting 5 workers in us-central1-a...
    Sep 29, 2021 12:50:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:18.178Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 29, 2021 12:50:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:39.435Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Sep 29, 2021 12:50:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:39.469Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Sep 29, 2021 12:50:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:49.832Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 29, 2021 12:51:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:51:14.831Z: Workers have started successfully.
    Sep 29, 2021 12:51:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:51:14.871Z: Workers have started successfully.
    Sep 29, 2021 12:51:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:51:43.695Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 12:51:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:51:43.827Z: Cleaning up.
    Sep 29, 2021 12:51:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:51:43.894Z: Stopping worker pool...
    Sep 29, 2021 12:54:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:54:10.501Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 29, 2021 12:54:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:54:10.549Z: Worker pool stopped.
    Sep 29, 2021 12:54:16 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-28_17_49_52-11142279470977090787 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 927253f6-e483-4a48-876a-0686275b0456 and timestamp: 2021-09-29T00:54:16.280000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.284

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 12:54:16 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 7 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.042 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.052 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 46.545 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 9m 52s
152 actionable tasks: 115 executed, 37 from cache

Publishing build scan...
https://gradle.com/s/gsmiwhly6qg4o

Stopped 6 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2481

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2481/display/redirect?page=changes>

Changes:

[chamikaramj] Sets the correct coder in 'ConstantTableDestinations' when using BQ

[Luke Cwik] [BEAM-12974] Migrate off of deprecated package for MockitoJUnitRunner

[noreply] [BEAM-12593] Verify DataFrame API on pandas 1.3 (with container update)

[noreply] [BEAM-12973] Print Go Test and Script info. (#15604)

[noreply] [BEAM-10913] - Installing and persisting Grafana plugin in kubernetes


------------------------------------------
[...truncated 341.27 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 957a10a4d509128460557542c6f8bab1
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 28, 2021 6:52:32 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 28, 2021 6:52:33 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 28, 2021 6:52:33 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 28, 2021 6:52:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:52:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:52:37 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 6:52:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:52:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 28, 2021 6:52:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 28, 2021 6:52:43 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 28, 2021 6:52:43 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 28, 2021 6:52:43 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test489072803952744924.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-uQraVB0IQDQ2YOEU5iWVuptBhQeg6Zsso9-RMi6dsZM.jar
    Sep 28, 2021 6:52:44 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 28, 2021 6:52:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 28, 2021 6:52:44 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 228eacc7913c31784768f960c3e984e786c6ca5e5acdfaa1f0ea31ce84cc06eb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Io6sx5E8MXhHaPlgw-mE54bGyl5azfqh8OoxzoTMBus.pb
    Sep 28, 2021 6:52:46 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 28, 2021 6:52:46 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 28, 2021 6:52:46 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 28, 2021 6:52:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 28, 2021 6:52:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-28_11_52_46-17894663093463120524?project=apache-beam-testing
    Sep 28, 2021 6:52:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-28_11_52_46-17894663093463120524
    Sep 28, 2021 6:52:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-28_11_52_46-17894663093463120524
    Sep 28, 2021 6:52:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-28T18:52:50.593Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 28, 2021 6:52:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:52:58.642Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 28, 2021 6:52:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:52:59.411Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 28, 2021 6:52:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:52:59.547Z: Expanding GroupByKey operations into optimizable parts.
    Sep 28, 2021 6:52:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:52:59.595Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 28, 2021 6:52:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:52:59.752Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 28, 2021 6:53:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:52:59.832Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 28, 2021 6:53:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:52:59.892Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 28, 2021 6:53:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:53:00.634Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 6:53:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:53:00.777Z: Starting 5 workers in us-central1-c...
    Sep 28, 2021 6:53:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:53:14.245Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 28, 2021 6:53:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:53:33.812Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 28, 2021 6:53:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:53:33.843Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 28, 2021 6:53:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:53:44.087Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 28, 2021 6:54:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:54:10.136Z: Workers have started successfully.
    Sep 28, 2021 6:54:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:54:10.195Z: Workers have started successfully.
    Sep 28, 2021 6:54:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:54:49.884Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 6:54:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:54:50.184Z: Cleaning up.
    Sep 28, 2021 6:54:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:54:50.437Z: Stopping worker pool...
    Sep 28, 2021 6:57:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:57:09.461Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 28, 2021 6:57:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:57:09.527Z: Worker pool stopped.
    Sep 28, 2021 6:57:15 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-28_11_52_46-17894663093463120524 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2161815d-114f-4e1c-8fa6-f57c56f413fd and timestamp: 2021-09-28T18:57:15.777000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    17.656

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 6:57:16 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 47.341 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 36s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/lrmzmpon2amzc

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2480

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2480/display/redirect>

Changes:


------------------------------------------
[...truncated 339.67 KB...]
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ad194d210d40413101015464770bfca6
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 28, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 28, 2021 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 28, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 28, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

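This coder failure (and the identical one for readUsingDefaultMethod below) is exactly what the exception text spells out: the monitored output is a PCollection of Beam Rows, and a Row PCollection cannot get a coder by inference alone; it needs an explicit schema. The following is a minimal sketch of the remedy the message points at, PCollection.setRowSchema, assuming an illustrative pipeline and field names rather than the actual test code.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // A schema mirroring the four columns the test query selects.
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();
        PCollection<Row> rows =
            p.apply(
                    Create.of(
                        Row.withSchema(schema)
                            .addValues("someone", "story", "a title", 3L)
                            .build()))
                // Without this call, coder inference for Row fails with the
                // IllegalStateException shown in the stack trace above.
                .setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }
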
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 28, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
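    For the push-down test, the plan above shows what the optimization buys: only the four used fields are requested, and the supported predicate is handed to the BigQuery Storage API instead of being evaluated in the pipeline. A rough hand-written equivalent at the BigQueryIO level is sketched below, with an illustrative table reference (the SQL layer builds this read internally; none of the names here are from the test itself).

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("bigquery-public-data:hacker_news.full") // illustrative
                    // DIRECT_READ reads via the BigQuery Storage API, which is
                    // what makes field and filter push-down possible.
                    .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                    // usedFields=[by, type, title, score] from the plan.
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    // The "supported" part of BigQueryFilter from the plan.
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
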
    Sep 28, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 28, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 28, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 28, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6972548575975067231.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6qcaWwe3GKP3EJBBpRszHeC07YU6vdlHz0z8wMhayPc.jar
    Sep 28, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 28, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 28, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash e037fa70c507fa1cc6c773c71bec1353a5da935c79caefa00b24b7e6d8155b55> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-4Df6cMUH-hzGx3PHG-wTU6Xak1x5yu-gCyS35tgVW1U.pb
    Sep 28, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 28, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 28, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 28, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 28, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-28_05_45_10-332958398704530790?project=apache-beam-testing
    Sep 28, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-28_05_45_10-332958398704530790
    Sep 28, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-28_05_45_10-332958398704530790
    Sep 28, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-28T12:45:13.593Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 28, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:22.446Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.184Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.212Z: Expanding GroupByKey operations into optimizable parts.
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.248Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.336Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.359Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.383Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.681Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.752Z: Starting 5 workers in us-central1-c...
    Sep 28, 2021 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:56.297Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 28, 2021 12:45:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:58.106Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 28, 2021 12:45:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:58.151Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 28, 2021 12:46:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:46:08.368Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 28, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:46:31.538Z: Workers have started successfully.
    Sep 28, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:46:31.574Z: Workers have started successfully.
    Sep 28, 2021 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:47:12.136Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:47:12.300Z: Cleaning up.
    Sep 28, 2021 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:47:12.394Z: Stopping worker pool...
    Sep 28, 2021 12:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:49:31.463Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 28, 2021 12:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:49:31.522Z: Worker pool stopped.
    Sep 28, 2021 12:49:40 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-28_05_45_10-332958398704530790 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 07445f19-266b-4e41-bda7-1f1d29bf5cc3 and timestamp: 2021-09-28T12:49:41.001000000Z:
                     Metric:                    Value:
                   read_time                    17.122
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 12:49:41 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 50.158 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 22s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/dni5fye7dagum

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2479

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2479/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12611] Capture a greater proportion of logs associated to an


------------------------------------------
[...truncated 336.99 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ad194d210d40413101015464770bfca6
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 28, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 28, 2021 6:44:57 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 28, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 28, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 28, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 28, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 28, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 28, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8826700596754350547.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-YGzE0tq0cHi5e33soYf-VTEG1yaLQCfKEhpRJW5tNqs.jar
    Sep 28, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 28, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 28, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 3ecb1a7f956a4f472e94d873bf27191d68a0e2769c96f1a74f3f42797f81c4b0> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Pssaf5VqT0culNhzvycZHWig4naclvGnTz9CeX-BxLA.pb
    Sep 28, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 28, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 28, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 28, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 28, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-27_23_45_10-16000690830406534922?project=apache-beam-testing
    Sep 28, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-27_23_45_10-16000690830406534922
    Sep 28, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-27_23_45_10-16000690830406534922
    Sep 28, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-28T06:45:14.080Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 28, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:19.547Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 28, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.204Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 28, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.262Z: Expanding GroupByKey operations into optimizable parts.
    Sep 28, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.295Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 28, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.357Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 28, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.385Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 28, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.419Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 28, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.778Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.859Z: Starting 5 workers in us-central1-a...
    Sep 28, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:48.174Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 28, 2021 6:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:46:06.583Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 28, 2021 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:46:33.200Z: Workers have started successfully.
    Sep 28, 2021 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:46:33.233Z: Workers have started successfully.
    Sep 28, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:46:59.099Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:46:59.284Z: Cleaning up.
    Sep 28, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:46:59.356Z: Stopping worker pool...
    Sep 28, 2021 6:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:49:26.083Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 28, 2021 6:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:49:26.165Z: Worker pool stopped.
    Sep 28, 2021 6:49:31 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-27_23_45_10-16000690830406534922 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 73e6f619-a8a0-44a6-bc65-affb1c64ee5b and timestamp: 2021-09-28T06:49:31.629000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.836

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 6:49:32 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 39.841 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/mwtdigdmrz2bm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2478

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2478/display/redirect?page=changes>

Changes:

[Andrew Pilloud] [BEAM-12691] FieldAccessDescriptor for BeamCalcRel

[aydar.zaynutdinov] Init basic go backend structure

[aydar.zaynutdinov] Add .gitkeep file as an exclusion

[yileiyang] Remove encoding= from the json.loads call.

[noreply] [BEAM-12798] Add configurable combiner packing limit (#15391)

[noreply] [BEAM-12926] Translates Reshuffle with Samza native repartition operator

[noreply] [BEAM-11007] Allow nbconvert 6.x (#15595)

[noreply] [BEAM-12945] Fix crashes when importing the DataFrame API with pandas


------------------------------------------
[...truncated 340.88 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ad194d210d40413101015464770bfca6
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 28, 2021 12:46:08 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 28, 2021 12:46:08 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 28, 2021 12:46:09 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 28, 2021 12:46:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:46:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:46:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 12:46:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:46:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:46:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 12:46:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
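
The DEFAULT read method fails the same way. The other remedy the message lists is setCoder; for Row elements the stock RowCoder carries the schema, so the two routes are equivalent. A short sketch under the same assumed schema:

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowCoderFix {
      // Same effect as setRowSchema for Row elements: the coder embeds the schema.
      static PCollection<Row> attachCoder(PCollection<Row> rows, Schema schema) {
        return rows.setCoder(RowCoder.of(schema));
      }
    }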

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 12:46:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 28, 2021 12:46:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
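
The pushed-down filter and the usedFields list above are what the Storage Read API receives. For reference, a hand-written read expressing the same push-down would look roughly like the sketch below; the table name is an assumption, not taken from this log:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();
        pipeline.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // assumed table
                .withMethod(Method.DIRECT_READ)
                // Only the used fields cross the wire...
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // ...and the supported predicate is evaluated server-side.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        pipeline.run().waitUntilFinish();
      }
    }
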
    Sep 28, 2021 12:46:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-tests-__o_cT2WEL8WknZtwTW8aQevksYt5UgSYWmasm174nQ.jar
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3858642203605244807.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-LwrHOCGbgC2ue4bcblknD22fEXbokBbZBKLgLWHT-m4.jar
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-ZgoREHN3SqOQclIGldjYWlJZYlnicfJ-w-LJBzxpP7U.jar
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 3 files newly uploaded in 0 seconds
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 56ec63180b31a16f47e66aca7425904f421affcf7986463dd5879a4f338982cc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-VuxjGAsxoW9H5mrKdCWQT0Ia_895hkY91YeaTzOJgsw.pb
    Sep 28, 2021 12:46:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 28, 2021 12:46:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 28, 2021 12:46:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 28, 2021 12:46:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 28, 2021 12:46:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-27_17_46_21-4927795710480121675?project=apache-beam-testing
    Sep 28, 2021 12:46:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-27_17_46_21-4927795710480121675
    Sep 28, 2021 12:46:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-27_17_46_21-4927795710480121675
    Sep 28, 2021 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-28T00:46:24.808Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 28, 2021 12:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:31.991Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:32.740Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:32.790Z: Expanding GroupByKey operations into optimizable parts.
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:32.826Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:32.900Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:32.937Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:32.971Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:33.340Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:33.417Z: Starting 5 workers in us-central1-a...
    Sep 28, 2021 12:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:47:02.860Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 28, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:47:25.907Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 28, 2021 12:47:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:47:50.298Z: Workers have started successfully.
    Sep 28, 2021 12:47:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:47:50.326Z: Workers have started successfully.
    Sep 28, 2021 12:48:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:48:21.799Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 12:48:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:48:21.983Z: Cleaning up.
    Sep 28, 2021 12:48:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:48:22.120Z: Stopping worker pool...
    Sep 28, 2021 12:50:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:50:41.337Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 28, 2021 12:50:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:50:41.377Z: Worker pool stopped.
    Sep 28, 2021 12:50:48 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-27_17_46_21-4927795710480121675 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 76715fd9-b7c5-4c3e-86cb-1bbaf407eead and timestamp: 2021-09-28T00:50:48.165000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.473

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 12:50:48 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 44.083 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 31s
152 actionable tasks: 99 executed, 53 from cache

Publishing build scan...
https://gradle.com/s/oexuvqidchaq4

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2477

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2477/display/redirect?page=changes>

Changes:

[noreply] Update container tags used by unreleased SDKs.


------------------------------------------
[...truncated 338.48 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
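
The -DbeamTestPipelineOptions flags in the command line above are turned into typed options by the standard factory. A minimal sketch of that mechanism, assuming a few of the same flags (this is the stock PipelineOptionsFactory API, not the test harness itself):

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class OptionsSketch {
      public static void main(String[] args) {
        // e.g. args = {"--project=apache-beam-testing", "--runner=DataflowRunner",
        //              "--region=us-central1", "--numWorkers=5"}
        DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
            .withValidation()
            .as(DataflowPipelineOptions.class);
        System.out.println(options.getProject() + " / " + options.getRegion());
      }
    }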

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 27, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 27, 2021 6:44:58 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 27, 2021 6:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 27, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
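
The SQLPlan/BEAMPlan pair above is the Calcite planner at work: the logical plan is rewritten into Beam rels, which BeamSqlRelUtils.toPCollection then expands. The same query can be issued through the public SqlTransform API over an input that already has a schema; a sketch, with the input PCollection assumed:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class PushDownQuerySketch {
      // hackerNews is assumed to be a PCollection<Row> whose schema has
      // by, type, title and score fields.
      static PCollection<Row> filter(PCollection<Row> hackerNews) {
        return hackerNews.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
      }
    }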


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 27, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 27, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 27, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 27, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 27, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8936137984933033682.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-KvPdtXhCZeNI55k6veRk58SDRj4ngDPkdSqPZi2Vx4s.jar
    Sep 27, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 27, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 27, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 123b1db6c7af88c71d94372c202a15e7a05d02060ba54f5477693084babb7769> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-EjsdtseviMcdlDcsICoV56BdAgYLpU9Ud2kwhLq7d2k.pb
    Sep 27, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 27, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 27, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 27, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 27, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-27_11_45_12-10521807972155577567?project=apache-beam-testing
    Sep 27, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-27_11_45_12-10521807972155577567
    Sep 27, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-27_11_45_12-10521807972155577567
    Sep 27, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-27T18:45:15.623Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:24.780Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:25.535Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:25.572Z: Expanding GroupByKey operations into optimizable parts.
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:25.597Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:25.671Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:25.709Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:25.743Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:26.200Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:26.275Z: Starting 5 workers in us-central1-c...
    Sep 27, 2021 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:32.104Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 27, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:46:00.948Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 27, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:46:00.984Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 27, 2021 6:46:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:46:11.216Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 27, 2021 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:46:36.254Z: Workers have started successfully.
    Sep 27, 2021 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:46:36.286Z: Workers have started successfully.
    Sep 27, 2021 6:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:47:16.301Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 6:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:47:16.445Z: Cleaning up.
    Sep 27, 2021 6:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:47:16.590Z: Stopping worker pool...
    Sep 27, 2021 6:49:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:49:33.041Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 27, 2021 6:49:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:49:33.092Z: Worker pool stopped.
    Sep 27, 2021 6:49:42 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-27_11_45_12-10521807972155577567 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): dda3b7d8-2d15-47c7-9de5-38cd1c09abee and timestamp: 2021-09-27T18:49:42.444000000Z:
                     Metric:                    Value:
                   read_time                    14.444
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 6:49:42 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 49.112 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 23s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/7k7nv6g4i2enk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2476

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2476/display/redirect>

Changes:


------------------------------------------
[...truncated 338.34 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 27, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 27, 2021 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 27, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 27, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
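
The method and writeDisposition being logged are table properties: the BigQuery table provider reads them from the table definition. A sketch of declaring such a table inline via SqlTransform's DDL hook; the withDdlString call, table location, and column types are assumptions here, not taken from this log:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;

    public class DdlSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();
        pipeline.apply(
            SqlTransform.query(
                    "SELECT `by` AS author, type, title, score FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                .withDdlString(
                    "CREATE EXTERNAL TABLE HACKER_NEWS(`by` VARCHAR, type VARCHAR, "
                        + "title VARCHAR, score BIGINT) "
                        + "TYPE bigquery "
                        + "LOCATION 'bigquery-public-data:hacker_news.full' "
                        + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'"));
        pipeline.run().waitUntilFinish();
      }
    }
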
    Sep 27, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 27, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 27, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 27, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 27, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 27, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1698953046670420008.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-MW8u-3tPmTyH5yoyJtew4u-wYz3XrKQaqwvhvgeXgkE.jar
    Sep 27, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 2 seconds
    Sep 27, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 27, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 89dd6f105e7b3b8391359e6ae9e4933451c16ae68aa39ed85bc6c82e400c2d37> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-id1vEF57O4ORNZ5q6eSTNFHBauaKo57YW8bILkAMLTc.pb
    Sep 27, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 27, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-27_05_45_11-16125404393212011150?project=apache-beam-testing
    Sep 27, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-27_05_45_11-16125404393212011150
    Sep 27, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-27_05_45_11-16125404393212011150
    Sep 27, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-27T12:45:14.510Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 27, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:21.432Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 27, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.218Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 27, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.269Z: Expanding GroupByKey operations into optimizable parts.
    Sep 27, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.303Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 27, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.364Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 27, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.395Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 27, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.423Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 27, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.810Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.876Z: Starting 5 workers in us-central1-c...
    Sep 27, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:34.089Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 27, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:59.410Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 27, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:59.455Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 27, 2021 12:46:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:46:09.735Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 27, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:46:32.976Z: Workers have started successfully.
    Sep 27, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:46:33.009Z: Workers have started successfully.
    Sep 27, 2021 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:47:03.538Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:47:03.697Z: Cleaning up.
    Sep 27, 2021 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:47:03.777Z: Stopping worker pool...
    Sep 27, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:49:27.387Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 27, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:49:27.436Z: Worker pool stopped.
    Sep 27, 2021 12:49:34 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-27_05_45_11-16125404393212011150 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 867e641a-db97-4a36-a5de-ea3285f71f93 and timestamp: 2021-09-27T12:49:34.956000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     10.04

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 12:49:35 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
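
The warning above means the publisher was not given an InfluxDB measurement or database, so the collected metrics are printed but not stored. A minimal sketch of supplying those settings, assuming Beam's test-utils InfluxDBSettings builder; the host, database, and measurement values are placeholders, not taken from this build:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSketch {
      public static void main(String[] args) {
        // Placeholder values; a real job would read these from its
        // pipeline options or system properties rather than hard-coding.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")
                .withDatabase("beam_performance")
                .withMeasurement("sql_bqio_read_java_batch")
                .get();
        System.out.println(settings);
      }
    }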

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 43.378 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/p7bm6iey2pxim

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2475

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2475/display/redirect>

Changes:


------------------------------------------
[...truncated 339.41 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
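
For context on the long command line above: the JSON array passed through -DbeamTestPipelineOptions is how Beam's TestPipeline receives its pipeline options. A minimal sketch of the equivalent direct construction, using only a subset of the flags above (it assumes the Dataflow runner jar is on the classpath so these flags are registered and validate):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class OptionsSketch {
      public static void main(String[] args) {
        // Same flags the Gradle task passes via -DbeamTestPipelineOptions;
        // TestPipeline.testingPipelineOptions() parses that system property
        // into the same kind of object.
        PipelineOptions options =
            PipelineOptionsFactory.fromArgs(
                    "--project=apache-beam-testing",
                    "--tempLocation=gs://temp-storage-for-perf-tests/loadtests",
                    "--runner=DataflowRunner",
                    "--region=us-central1")
                .withValidation()
                .create();
        System.out.println(options);
      }
    }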

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 27, 2021 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 27, 2021 6:44:59 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 27, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 27, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
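
The SQL text and the two plans above are Beam's Calcite planner at work: the logical project/filter pair is rewritten into a BeamCalcRel over a BeamIOSourceRel. For reference, a query of this shape is normally submitted through SqlTransform; a minimal sketch, with "input" assumed to be a PCollection<Row> that already carries the HACKER_NEWS schema:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class SqlSketch {
      // "PCOLLECTION" is the implicit table name SqlTransform gives its input.
      static PCollection<Row> storiesAndJobs(PCollection<Row> input) {
        return input.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
      }
    }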


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
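
The failure above says the Row-typed output of ParDo(RowMonitor) has no schema attached, so no coder can be inferred. A minimal sketch of the remedy the message itself names, setRowSchema (all names and the four-field schema are illustrative, not the IT's actual code):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Illustrative schema matching the projected columns in the query.
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt32Field("score")
                .build();
        Row row = Row.withSchema(schema).addValues("a", "story", "t", 3).build();
        PCollection<Row> rows =
            p.apply(Create.of(Arrays.asList(row)).withCoder(RowCoder.of(schema)));
        // A ParDo that re-emits Rows cannot infer a Row coder on its own;
        // attaching the schema to its output is the fix the error suggests.
        rows.apply(
                ParDo.of(
                    new DoFn<Row, Row>() {
                      @ProcessElement
                      public void process(@Element Row r, OutputReceiver<Row> out) {
                        out.output(r);
                      }
                    }))
            .setRowSchema(schema); // or .setCoder(RowCoder.of(schema))
        p.run().waitUntilFinish();
      }
    }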

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 27, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
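
At the IO level, the projection (usedFields) and the filter logged above translate into a BigQuery Storage API read with selected fields and a row restriction. A minimal sketch of the same read expressed directly against BigQueryIO; the table reference is a placeholder, not the table this test uses:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("bigquery-public-data:hacker_news.full") // placeholder
                    .withMethod(Method.DIRECT_READ)
                    // The same projection and filter the planner pushed down:
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
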
    Sep 27, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 27, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 27, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 27, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test606067313956805319.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-XVMyk8-627vyBINKnF0d9O_TbZjHcRUaLr5SgEpkflE.jar
    Sep 27, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Sep 27, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 27, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash a9bb741af712aaca2d27e2bff84804dd533a5921550fec01bb003874fa05bf5d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-qbt0GvcSqsotJ-K_-EgE3VM6WSFVD-wBuwA4dPoFv10.pb
    Sep 27, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 27, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 27, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 27, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 27, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-26_23_45_19-9658409184207848229?project=apache-beam-testing
    Sep 27, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-26_23_45_19-9658409184207848229
    Sep 27, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-26_23_45_19-9658409184207848229
    Sep 27, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-27T06:45:22.389Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:34.172Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.036Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.075Z: Expanding GroupByKey operations into optimizable parts.
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.102Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.174Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.200Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.233Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.529Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.586Z: Starting 5 workers in us-central1-c...
    Sep 27, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:43.170Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 27, 2021 6:46:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:46:21.095Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 27, 2021 6:46:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:46:47.019Z: Workers have started successfully.
    Sep 27, 2021 6:46:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:46:47.055Z: Workers have started successfully.
    Sep 27, 2021 6:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:47:17.274Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 6:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:47:17.408Z: Cleaning up.
    Sep 27, 2021 6:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:47:17.484Z: Stopping worker pool...
    Sep 27, 2021 6:49:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:49:39.507Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 27, 2021 6:49:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:49:39.537Z: Worker pool stopped.
    Sep 27, 2021 6:49:45 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-26_23_45_19-9658409184207848229 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 3abfee88-11a2-484c-ba29-cddc2e21f9e2 and timestamp: 2021-09-27T06:49:45.464000000Z:
                     Metric:                    Value:
                   read_time                      8.58
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 6:49:46 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 54.179 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 28s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/cqexpwyazocci

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2474

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2474/display/redirect>

Changes:


------------------------------------------
[...truncated 337.45 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 27, 2021 12:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 27, 2021 12:44:57 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 27, 2021 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 27, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 27, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 27, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 27, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 27, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 27, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5895972536111496274.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-61IUw3jGh0-VUqGy5GTi1tycqW1iq3FKeI23tM-4hkY.jar
    Sep 27, 2021 12:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 27, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 27, 2021 12:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash f0711782d78e76ed34f7fa55c40211b2985f0b8145a086e1f24abe7d8dfe5834> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8HEXgteOdu009_pVxAIRsphfC4FFoIbh8kq-fY3-WDQ.pb
    Sep 27, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 27, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 27, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 27, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 27, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-26_17_45_10-9883298268261907232?project=apache-beam-testing
    Sep 27, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-26_17_45_10-9883298268261907232
    Sep 27, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-26_17_45_10-9883298268261907232
    Sep 27, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-27T00:45:14.085Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:20.279Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:20.996Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:21.058Z: Expanding GroupByKey operations into optimizable parts.
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:21.088Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:21.162Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:21.188Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:21.215Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:21.614Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:21.693Z: Starting 5 workers in us-central1-c...
    Sep 27, 2021 12:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:51.156Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 27, 2021 12:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:46:01.729Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 27, 2021 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:46:30.803Z: Workers have started successfully.
    Sep 27, 2021 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:46:30.823Z: Workers have started successfully.
    Sep 27, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:47:00.290Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:47:00.426Z: Cleaning up.
    Sep 27, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:47:00.526Z: Stopping worker pool...
    Sep 27, 2021 12:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:49:13.308Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 27, 2021 12:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:49:13.346Z: Worker pool stopped.
    Sep 27, 2021 12:49:19 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-26_17_45_10-9883298268261907232 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 78774a8c-e473-4256-a112-56fad869f6c0 and timestamp: 2021-09-27T00:49:19.665000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.946

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 12:49:20 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 28.152 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 2s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/yt3emrpzmp5cc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2473

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2473/display/redirect>

Changes:


------------------------------------------
[...truncated 339.99 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 26, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 26, 2021 6:45:00 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 26, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 26, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
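
The IllegalStateException above is Beam's standard coder-resolution failure for a schema-less PCollection<Row>: no RowCoder can be inferred until a schema is attached. A minimal sketch of the remedy the message itself names (PCollection.setRowSchema), using an illustrative schema that mirrors the query's projected columns rather than anything taken from the test source:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());

        // Illustrative schema mirroring the query's projected columns.
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        PCollection<Row> rows =
            p.apply(
                Create.of(Row.withSchema(schema).addValues("alice", "story", "hi", 3L).build())
                    .withRowSchema(schema));

        // A ParDo emitting Row (like ParDo(RowMonitor) in the log) cannot infer
        // a coder on its own; attaching the schema is the fix the message names.
        PCollection<Row> monitored =
            rows.apply(
                    ParDo.of(
                        new DoFn<Row, Row>() {
                          @ProcessElement
                          public void processElement(@Element Row row, OutputReceiver<Row> out) {
                            out.output(row);
                          }
                        }))
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }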

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 26, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 26, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 26, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 26, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 26, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5132010080650681199.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-cqTi9NX-l9vIjrtS_yl5wS3CrksSOoMGbycT8bqs4l4.jar
    Sep 26, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.14.1/85348a0c44c298bbec5ae747e67ae12e60b3aef6/libthrift-0.14.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/libthrift-0.14.1-WzUQ_nLm8HJeKc7269seq6zMxp15_E7VC2gWAKh2Z-w.jar
    Sep 26, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat/tomcat-annotations-api/8.5.46/56c67699de192c603afd6f029e80e5ff8d98e7e9/tomcat-annotations-api-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-annotations-api-8.5.46-amtG0OaVhkRRTAyjZYs7B-YSOmgqIO4203lSQnNfq8M.jar
    Sep 26, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/8.5.46/5d686394334d143f48251827435ab086a161e75e/tomcat-embed-core-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-embed-core-8.5.46-vl-FREjS7l1uADb-srT3ExYweaG2uaepdQjlWRetNcI.jar
    Sep 26, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 246 files cached, 4 files newly uploaded in 0 seconds
    Sep 26, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 26, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash 5e3dbc930ebc167e27f19d0d5ba381333c2fb8916345d839a2d556ca42f72583> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Xj28kw68Fn4n8Z0NW6OBMzwvuJFjRdg5otVWykL3JYM.pb
    Sep 26, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 26, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 26, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 26, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 26, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-26_11_45_14-11136377410489573887?project=apache-beam-testing
    Sep 26, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-26_11_45_14-11136377410489573887
    Sep 26, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-26_11_45_14-11136377410489573887
    Sep 26, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-26T18:45:17.364Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:22.292Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:22.874Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:22.905Z: Expanding GroupByKey operations into optimizable parts.
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:22.933Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:23.005Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:23.030Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:23.052Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:23.332Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:23.396Z: Starting 5 workers in us-central1-a...
    Sep 26, 2021 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:31.122Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 26, 2021 6:46:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:46:07.181Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 26, 2021 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:46:33.881Z: Workers have started successfully.
    Sep 26, 2021 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:46:33.914Z: Workers have started successfully.
    Sep 26, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:47:03.208Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:47:03.355Z: Cleaning up.
    Sep 26, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:47:03.422Z: Stopping worker pool...
    Sep 26, 2021 6:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:49:22.658Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 26, 2021 6:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:49:22.703Z: Worker pool stopped.
    Sep 26, 2021 6:49:28 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-26_11_45_14-11136377410489573887 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8eea85c1-1a26-469a-a8ac-1bddc55bf2a5 and timestamp: 2021-09-26T18:49:28.285000000Z:
                     Metric:                    Value:
                   read_time                     8.894
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 6:49:28 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
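
For reference, the projection and filter push-down logged in this test (usedFields=[by, type, title, score] plus the row filter handed to the Storage Read API) corresponds to what one would write by hand against BigQueryIO's DIRECT_READ options. A hedged sketch, with a hypothetical table name, since the IT wires this up through BigQueryTable rather than calling BigQueryIO directly:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Hand-written equivalent of the pushed-down read from the log:
        // projection usedFields=[by, type, title, score] and the filter
        // (type = 'story' OR type = 'job') AND score > 2.
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("my-project:my_dataset.HACKER_NEWS") // hypothetical table
                    .withMethod(Method.DIRECT_READ)
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }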

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 33.013 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/b44yzy35q7g6o

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2472

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2472/display/redirect>

Changes:


------------------------------------------
[...truncated 337.98 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 26, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 26, 2021 12:44:58 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 26, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 26, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
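
Both failing tests hand a planned rel node to BeamSqlRelUtils.toPCollection. The public SqlTransform entry point runs the same query while attaching the output schema itself, so the coder failure above does not arise on that path. A sketch under that assumption, with made-up input data:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SqlTransformSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());

        Schema schema =
            Schema.builder()
                .addNullableField("by", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        PCollection<Row> rows =
            p.apply(
                Create.of(Row.withSchema(schema).addValues("alice", "story", "hi", 3L).build())
                    .withRowSchema(schema));

        // SqlTransform sets the schema (and hence the coder) of its output.
        PCollection<Row> filtered =
            rows.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, `type`, `title`, `score` "
                        + "FROM PCOLLECTION "
                        + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));

        p.run().waitUntilFinish();
      }
    }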

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 26, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 26, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 26, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 26, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 26, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1254604070143763192.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-9-kMvil9kuVjtdOuQKHYscGOFYbYN5eesmOu_omgvz8.jar
    Sep 26, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Sep 26, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 26, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash af62ebf8274598151ff04eec2336c5b03bf73f2e28da7d1c241753ab8fef334b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-r2Lr-CdFmBUf8E7sIzbFsDv3Py4o2n0cJBdTq4_vM0s.pb
    Sep 26, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 26, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 26, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 26, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 26, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-26_05_45_14-9479678245125380563?project=apache-beam-testing
    Sep 26, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-26_05_45_14-9479678245125380563
    Sep 26, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-26_05_45_14-9479678245125380563
    Sep 26, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-26T12:45:17.295Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 26, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:24.383Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 26, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.143Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 26, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.193Z: Expanding GroupByKey operations into optimizable parts.
    Sep 26, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.211Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 26, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.271Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 26, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.291Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 26, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.322Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 26, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.669Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.745Z: Starting 5 workers in us-central1-c...
    Sep 26, 2021 12:45:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:49.475Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 26, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:59.787Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 26, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:59.809Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 26, 2021 12:46:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:46:10.027Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 26, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:46:36.856Z: Workers have started successfully.
    Sep 26, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:46:36.898Z: Workers have started successfully.
    Sep 26, 2021 12:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:47:06.020Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 12:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:47:06.176Z: Cleaning up.
    Sep 26, 2021 12:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:47:06.257Z: Stopping worker pool...
    Sep 26, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:49:26.413Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 26, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:49:26.477Z: Worker pool stopped.
    Sep 26, 2021 12:49:31 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-26_05_45_14-9479678245125380563 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a8b4c07a-fbda-4b8b-ac9f-7ec72f2fd5a1 and timestamp: 2021-09-26T12:49:31.921000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.881

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 12:49:33 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
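
The warning above means the InfluxDB publisher skipped this run because no measurement/database was configured. Assuming the influx option names used by Beam's other load-test jobs (an assumption; they do not appear in this job's -DbeamTestPipelineOptions), the missing settings would be supplied as extra pipeline options along the lines of:

    --influxDatabase=<database>
    --influxMeasurement=<measurement>
    --influxHost=<host, e.g. http://localhost:8086>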

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 40.88 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 15s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/misvg7v45ysku

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2471

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2471/display/redirect>

Changes:


------------------------------------------
[...truncated 338.44 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 26, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 26, 2021 6:44:57 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 26, 2021 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 26, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 26, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 26, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 26, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 26, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 26, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1235604829244032045.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-XqowqaS49m2HdUlnmpXrWGy6UWTNhVhLxBOJFdOJcpw.jar
    Sep 26, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 26, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 26, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 045abca4714f0fae801463ce12fdfc75f5a037f7c51a41734820e0019a4ec701> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-BFq8pHFPD66AFGPOEv38dfWgN_fFGkFzSCDgAZpOxwE.pb
    Sep 26, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 26, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 26, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 26, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 26, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-25_23_45_10-12170458182251827948?project=apache-beam-testing
    Sep 26, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-25_23_45_10-12170458182251827948
    Sep 26, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-25_23_45_10-12170458182251827948
    Sep 26, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-26T06:45:14.041Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:20.353Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.074Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.113Z: Expanding GroupByKey operations into optimizable parts.
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.150Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.221Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.248Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.285Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.628Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.718Z: Starting 5 workers in us-central1-c...
    Sep 26, 2021 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:47.609Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
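
The quota message above points at the raw Monitoring v3 list/delete endpoints; the same cleanup can be scripted with the Java client library. A hedged sketch, assuming the google-cloud-monitoring client is available on the classpath (it is not part of this test suite):

    import com.google.api.MetricDescriptor;
    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.ListMetricDescriptorsRequest;
    import com.google.monitoring.v3.ProjectName;

    public class MetricDescriptorCleanupSketch {
      public static void main(String[] args) throws Exception {
        try (MetricServiceClient client = MetricServiceClient.create()) {
          ListMetricDescriptorsRequest request =
              ListMetricDescriptorsRequest.newBuilder()
                  .setName(ProjectName.of("apache-beam-testing").toString())
                  // Filter for the user-defined descriptors the message counts.
                  .setFilter("metric.type = starts_with(\"custom.googleapis.com/\")")
                  .build();
          for (MetricDescriptor d : client.listMetricDescriptors(request).iterateAll()) {
            System.out.println(d.getName());
            // client.deleteMetricDescriptor(d.getName());  // uncomment to actually delete
          }
        }
      }
    }
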
    Sep 26, 2021 6:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:55.286Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 26, 2021 6:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:55.319Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 26, 2021 6:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:46:05.542Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 26, 2021 6:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:46:30.039Z: Workers have started successfully.
    Sep 26, 2021 6:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:46:30.079Z: Workers have started successfully.
    Sep 26, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:46:59.101Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:46:59.263Z: Cleaning up.
    Sep 26, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:46:59.367Z: Stopping worker pool...
    Sep 26, 2021 6:49:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:49:14.151Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 26, 2021 6:49:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:49:14.200Z: Worker pool stopped.
    Sep 26, 2021 6:49:21 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-25_23_45_10-12170458182251827948 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4558b16c-a64e-48ab-88ca-dceffb0d9a04 and timestamp: 2021-09-26T06:49:21.114000000Z:
                     Metric:                    Value:
                   read_time                     9.072
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 6:49:21 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
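
This warning means the InfluxDB publisher never received its database/measurement settings, so the run only printed the console summary above. Assuming Beam's test utilities read these from the -DbeamTestPipelineOptions array used by this job (the option names below are an assumption, not confirmed by this log), publishing could likely be enabled by adding something like:

    --influxDatabase=beam_performance --influxMeasurement=sql_bqio_read_java_batch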

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 28.379 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 2s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/mf35lmvuobpb6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2470

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2470/display/redirect>

Changes:


------------------------------------------
[...truncated 337.75 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 26, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 26, 2021 12:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 26, 2021 12:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 26, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 26, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 26, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 26, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 26, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 26, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1834057475545864917.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-yWIMTLhmjPJLlhf9R_2BhWwV8Bx-Bjc22jrTxmwJgo8.jar
    Sep 26, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 26, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 26, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash 844be46f876ef98404bf3f0c87ed56fedf368f27be27326fa318dfc790479a0f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-hEvkb4du-YQEvz8Mh-1W_t82jye-JzJvoxjfx5BHmg8.pb
    Sep 26, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 26, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 26, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 26, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 26, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-25_17_45_06-6421286648442750099?project=apache-beam-testing
    Sep 26, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-25_17_45_06-6421286648442750099
    Sep 26, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-25_17_45_06-6421286648442750099
    Sep 26, 2021 12:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-26T00:45:09.573Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 26, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:16.083Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:16.818Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:16.847Z: Expanding GroupByKey operations into optimizable parts.
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:16.878Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:16.948Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:16.975Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:17.008Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:17.534Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:17.607Z: Starting 5 workers in us-central1-a...
    Sep 26, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:26.661Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 26, 2021 12:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:46:03.561Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 26, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:46:30.032Z: Workers have started successfully.
    Sep 26, 2021 12:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:46:30.088Z: Workers have started successfully.
    Sep 26, 2021 12:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:46:58.907Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 12:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:46:59.050Z: Cleaning up.
    Sep 26, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:46:59.152Z: Stopping worker pool...
    Sep 26, 2021 12:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:49:20.479Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 26, 2021 12:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:49:20.526Z: Worker pool stopped.
    Sep 26, 2021 12:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-25_17_45_06-6421286648442750099 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9d1ce8f9-c69b-49ac-8fd8-008a50fad271 and timestamp: 2021-09-26T00:49:26.375000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.087

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 12:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 37.561 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/72olwmfyztj2q

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2469

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2469/display/redirect>

Changes:


------------------------------------------
[...truncated 337.75 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 25, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 25, 2021 6:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 25, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 25, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 25, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 25, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 25, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 25, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8238155918827975761.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-TRur8FMsi0thdlBAFaqG7pqS8oYSIwXO8ZKbm25MraY.jar
    Sep 25, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 25, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 25, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash ed7c385e0be0647a842284b68b592f1461157ec46f401fae8892fc0e12473cda> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-7Xw4XgvgZHqEIoS2i1kvFGEVfsRvQB-uiJL8DhJHPNo.pb
    Sep 25, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 25, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 25, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 25, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 25, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-25_11_45_08-4975878268342362171?project=apache-beam-testing
    Sep 25, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-25_11_45_08-4975878268342362171
    Sep 25, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-25_11_45_08-4975878268342362171
    Sep 25, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-25T18:45:12.177Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 25, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:17.856Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:18.532Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:18.602Z: Expanding GroupByKey operations into optimizable parts.
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:18.631Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:18.695Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:18.718Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:18.742Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:19.039Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:19.120Z: Starting 5 workers in us-central1-a...
    Sep 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:40.300Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 25, 2021 6:46:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:46:08.797Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 25, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:46:33.495Z: Workers have started successfully.
    Sep 25, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:46:33.521Z: Workers have started successfully.
    Sep 25, 2021 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:47:06.030Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:47:06.176Z: Cleaning up.
    Sep 25, 2021 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:47:06.261Z: Stopping worker pool...
    Sep 25, 2021 6:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:49:31.848Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 25, 2021 6:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:49:31.886Z: Worker pool stopped.
    Sep 25, 2021 6:49:38 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-25_11_45_08-4975878268342362171 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 15db7ab9-ee8a-4504-a8a5-d08aa9191a95 and timestamp: 2021-09-25T18:49:38.775000000Z:
                     Metric:                    Value:
                   read_time                     8.479
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 6:49:39 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 48.54 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 20s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/enghqxrp46tki

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2468

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2468/display/redirect>

Changes:


------------------------------------------
[...truncated 343.89 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 25, 2021 12:45:20 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 25, 2021 12:45:20 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 25, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 25, 2021 12:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
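
This failure, and the identical readUsingDefaultMethod failure below, have the root cause the exception spells out: the PCollection of Beam Rows reaches coder resolution without a schema attached, so no RowCoder can be inferred. The remedy the message itself suggests is PCollection.setRowSchema. A minimal self-contained sketch, with field names and types guessed from the query above rather than taken from the real table definition:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        // Schema mirroring the four projected columns in the query above.
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")   // assumed type; the table may use INT64
            .build();

        Row row = Row.withSchema(schema)
            .addValues("someone", "story", "Example title", 3)
            .build();

        // Without setRowSchema here, coder inference fails exactly as logged.
        PCollection<Row> rows = pipeline
            .apply(Create.of(Arrays.asList(row)))
            .setRowSchema(schema);

        pipeline.run().waitUntilFinish();
      }
    }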

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 25, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
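
The plan and filter lines above are the push-down working as intended: the planner replaced the generic source with a BeamPushDownIOSourceRel that reads only the four used fields, and the whole predicate was classified as supported, so BigQuery evaluates it server-side. The same effect can be expressed directly against BigQueryIO, as a sketch (the table reference below is a placeholder, not taken from this log):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        pipeline.apply(BigQueryIO.readTableRows()
            .from("my-project:my_dataset.HACKER_NEWS")  // placeholder table
            .withMethod(TypedRead.Method.DIRECT_READ)
            // Projection push-down: only these columns cross the wire.
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // Filter push-down: evaluated by the Storage Read API.
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }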
    Sep 25, 2021 12:45:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 25, 2021 12:45:32 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 25, 2021 12:45:32 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 25, 2021 12:45:32 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8001638972657287600.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-UvFX61zunvX7BuizZ3b4gxlqm0h6oZuwbc9Zdwvd_wQ.jar
    Sep 25, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Sep 25, 2021 12:45:33 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 25, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash 4c7c3cdddebf371f1d2b96d022fcabda988b617a94333afe45fdf50d96192006> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-THw83d6_Nx8dK5bQIvyr2piLYXqUMzr-Rf31DZYZIAY.pb
    Sep 25, 2021 12:45:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 25, 2021 12:45:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 25, 2021 12:45:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 25, 2021 12:45:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 25, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-25_05_45_36-15522118172883620923?project=apache-beam-testing
    Sep 25, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-25_05_45_36-15522118172883620923
    Sep 25, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-25_05_45_36-15522118172883620923
    Sep 25, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-25T12:45:39.660Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
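
This warning is expected for this job: the executor command line above passes --numWorkers=5 together with --autoscalingAlgorithm=NONE, so Dataflow runs a fixed pool of five workers and --maxNumWorkers has no effect. The equivalent configuration in code, as a sketch:

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineWorkerPoolOptions.AutoscalingAlgorithmType;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class FixedWorkerPoolSketch {
      public static void main(String[] args) {
        DataflowPipelineOptions options =
            PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);

        options.setNumWorkers(5);     // fixed pool size
        options.setMaxNumWorkers(5);  // ignored once autoscaling is disabled
        options.setAutoscalingAlgorithm(AutoscalingAlgorithmType.NONE);
      }
    }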
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:44.766Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:45.482Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:45.545Z: Expanding GroupByKey operations into optimizable parts.
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:45.573Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:45.637Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:45.662Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:45.693Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:46.020Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:46.085Z: Starting 5 workers in us-central1-a...
    Sep 25, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:58.575Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 25, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:46:31.503Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 25, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:46:56.523Z: Workers have started successfully.
    Sep 25, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:46:56.542Z: Workers have started successfully.
    Sep 25, 2021 12:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:47:24.373Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 12:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:47:24.492Z: Cleaning up.
    Sep 25, 2021 12:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:47:24.551Z: Stopping worker pool...
    Sep 25, 2021 12:49:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:49:45.782Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 25, 2021 12:49:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:49:45.825Z: Worker pool stopped.
    Sep 25, 2021 12:49:50 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-25_05_45_36-15522118172883620923 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4944706f-133e-4e6a-b59a-da64c29a8332 and timestamp: 2021-09-25T12:49:50.829000000Z:
                     Metric:                    Value:
                   read_time                      8.49
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 12:49:51 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 35.131 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/4euke2lgttdmw

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2467

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2467/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11982] Java Spanner - Implement IO Request Count metrics (#15493)


------------------------------------------
[...truncated 352.65 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 25, 2021 6:46:25 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 25, 2021 6:46:25 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 25, 2021 6:46:26 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 25, 2021 6:46:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:46:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:46:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 6:46:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:46:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:46:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:46:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 6:46:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:46:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:46:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 6:46:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 25, 2021 6:46:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 25, 2021 6:46:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 25, 2021 6:46:37 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 25, 2021 6:46:37 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 25, 2021 6:46:37 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2940449598606000992.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-v42ycm5qXd3fLwhQVOLmUpUDz8Xi7obUeUNvBkWeEqU.jar
    Sep 25, 2021 6:46:38 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 25, 2021 6:46:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 25, 2021 6:46:38 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103999 bytes, hash bfd1edf3246b3d51bbbe1f571961bc8b8ebff57fd77c7c51f34e16250e8a6cac> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-v9Ht8yRrPVG7vh9XGWG8i46_9X_XfHxR804WJQ6KbKw.pb
    Sep 25, 2021 6:46:40 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 25, 2021 6:46:40 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 25, 2021 6:46:40 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 25, 2021 6:46:40 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 25, 2021 6:46:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-24_23_46_40-77168588993866869?project=apache-beam-testing
    Sep 25, 2021 6:46:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-24_23_46_40-77168588993866869
    Sep 25, 2021 6:46:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-24_23_46_40-77168588993866869
    Sep 25, 2021 6:46:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-25T06:46:44.084Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 25, 2021 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:49.438Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.180Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.215Z: Expanding GroupByKey operations into optimizable parts.
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.244Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.299Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.322Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.352Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.653Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.718Z: Starting 5 workers in us-central1-a...
    Sep 25, 2021 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:56.641Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 25, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:47:25.066Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 25, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:47:25.096Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 25, 2021 6:47:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:47:35.472Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Sep 25, 2021 6:47:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:47:35.496Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Sep 25, 2021 6:47:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:47:45.830Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 25, 2021 6:48:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:48:01.677Z: Workers have started successfully.
    Sep 25, 2021 6:48:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:48:01.702Z: Workers have started successfully.
    Sep 25, 2021 6:48:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:48:34.761Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 6:48:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:48:34.903Z: Cleaning up.
    Sep 25, 2021 6:48:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:48:34.975Z: Stopping worker pool...
    Sep 25, 2021 6:50:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:50:53.573Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 25, 2021 6:50:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:50:53.617Z: Worker pool stopped.
    Sep 25, 2021 6:50:58 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-24_23_46_40-77168588993866869 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d7b997e1-445c-410c-972d-ea7683bbeed8 and timestamp: 2021-09-25T06:50:58.986000000Z:
                     Metric:                    Value:
                   read_time                    13.598
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 6:50:59 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.21 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 38.103 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 37s
152 actionable tasks: 104 executed, 48 from cache

Publishing build scan...
https://gradle.com/s/ott5u3qmewgqc

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2466

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2466/display/redirect?page=changes>

Changes:

[Robert Bradshaw] [BEAM-12934] Use environment capabilities to determine length prefixing.

[Brian Hulette] Add missing comma

[ryanthompson591] Update tensorflow to version 2.6.0

[zhoufek] [BEAM-9487] Fix incorrected Repeatedly.may_finish implementation

[noreply] Cleanup use of futures. (#15043)

[noreply] Go Lint fix for wordcount and metrics (#15580)

[Brian Hulette] Fix whitespace lint

[noreply] [BEAM-3304] Snippets for trigger in BPG (#15409)

[noreply] [BEAM-12832] Add Go SDK xlang info to programming guide. (#15447)

[noreply] [BEAM-11097] Add SideInputCache to StateReader (#15563)

[Robert Bradshaw] Revert "Avoid apiary submission of job graph when it is not needed.

[noreply] [BEAM-12769] Java-emulating external transform. (#15546)

[noreply] [BEAM-12913] Pass query priority from ReadAllFromBigQuery (#15584)


------------------------------------------
[...truncated 338.28 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 8 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c75df4ad0b4326aabbad29be2039d846
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 8'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 8'
Successfully started process 'Gradle Test Executor 8'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 25, 2021 12:44:38 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 25, 2021 12:44:39 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 25, 2021 12:44:40 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 25, 2021 12:44:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:44:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:44:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 12:44:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
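
    For reference, the projection in usedFields and the pushed-down predicate map directly onto BigQueryIO's Storage Read API options. A sketch of the equivalent stand-alone read follows; the public Hacker News table name is used for illustration only, since the IT's actual table reference is not shown in this log, and running it would require GCP credentials.

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            BigQueryIO.readTableRows()
                // Illustrative table; the IT reads a Hacker News dataset.
                .from("bigquery-public-data:hacker_news.full")
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Column projection, as in usedFields=[[by, type, title, score]].
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Row restriction, as in the pushed-down filter logged above.
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }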
    Sep 25, 2021 12:44:46 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 25, 2021 12:44:50 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 25, 2021 12:44:50 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-iWELRk0J0eqj9_ooQn6KLmU0Q20eazTA8el4stWEfXY.jar
    Sep 25, 2021 12:44:50 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7309957586490269237.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-_WnyG8sLTgAp6Br9WwdaQ1vDWck5goL-RRfSg374y_Y.jar
    Sep 25, 2021 12:44:57 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 7 seconds
    Sep 25, 2021 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 25, 2021 12:44:57 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash e856d800349e70babc4d947a539bc6e2d7bd2517520d0092fdd59d3481f2ec48> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-6FbYADSecLq8TZR6U5vG4te9JRdSDQCS_dWdNIHy7Eg.pb
    Sep 25, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 25, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 25, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 25, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 25, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-24_17_45_00-410929181615742579?project=apache-beam-testing
    Sep 25, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-24_17_45_00-410929181615742579
    Sep 25, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-24_17_45_00-410929181615742579
    Sep 25, 2021 12:45:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-25T00:45:03.579Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 25, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:34.207Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 25, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:35.577Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 25, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:36.306Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 25, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:36.355Z: Expanding GroupByKey operations into optimizable parts.
    Sep 25, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:36.390Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 25, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:36.488Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 25, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:36.514Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 25, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:36.550Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 25, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:36.936Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:37.010Z: Starting 5 workers in us-central1-c...
    Sep 25, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:46:19.593Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 25, 2021 12:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:46:45.910Z: Workers have started successfully.
    Sep 25, 2021 12:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:46:45.945Z: Workers have started successfully.
    Sep 25, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:47:18.633Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:47:18.793Z: Cleaning up.
    Sep 25, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:47:18.856Z: Stopping worker pool...
    Sep 25, 2021 12:49:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:49:29.735Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 25, 2021 12:49:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:49:29.777Z: Worker pool stopped.
    Sep 25, 2021 12:49:37 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-24_17_45_00-410929181615742579 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cbe5e528-9c14-41a3-8dc1-48566506957a and timestamp: 2021-09-25T00:49:37.265000000Z:
                     Metric:                    Value:
                   read_time                     9.996
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 12:49:37 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
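
    This warning means no InfluxDB settings reached the test, so the metrics above are printed to the console but not persisted. Assuming the influx* option names used by Beam's load-test utilities (an assumption; they do not appear in this log, and the host and database values here are placeholders), publishing would require adding entries like these to the -DbeamTestPipelineOptions array shown earlier:

    "--influxHost=http://localhost:8086",
    "--influxDatabase=beam_test_metrics",
    "--influxMeasurement=sql_bqio_read_java_batch"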

Gradle Test Executor 8 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.001 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.002 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 2.928 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 20s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/d24zdbrmadpau

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2465

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2465/display/redirect>

Changes:


------------------------------------------
[...truncated 338.62 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c75df4ad0b4326aabbad29be2039d846
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
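
    The SLF4J notice is informational: two bindings are on the classpath, one shaded into the legacy worker jar and one from slf4j-jdk14, and SLF4J settles on JDK14LoggerFactory as reported. Were the duplicate coming from an ordinary dependency rather than a prebuilt jar, a Gradle exclusion would be the usual remedy; the sketch below is illustrative only, with the excluded module chosen purely as an example.

    configurations.all {
        // Hypothetical: drop one binding so a single SLF4J backend remains.
        exclude group: 'org.slf4j', module: 'slf4j-jdk14'
    }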

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 24, 2021 6:44:42 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 24, 2021 6:44:42 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 24, 2021 6:44:43 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 24, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:44:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 6:44:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:44:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:44:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 6:44:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 6:44:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 24, 2021 6:44:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 24, 2021 6:44:49 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 24, 2021 6:44:52 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 24, 2021 6:44:52 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-iWELRk0J0eqj9_ooQn6KLmU0Q20eazTA8el4stWEfXY.jar
    Sep 24, 2021 6:44:52 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1284230225333265737.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-1o2h1Ljn-MnvLwScjFCA2jm-rVn_-0DVX_ewaRdj_-w.jar
    Sep 24, 2021 6:44:52 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 24, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 24, 2021 6:44:53 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 7eb4f8696f3cb768b6dfb6962d89952410950cb4206f3c7ed25e1d11dd0417f2> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-frT4aW88t2i237aWLYmVJBCVDLQgbzx-0l4dEd0EF_I.pb
    Sep 24, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 24, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 24, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 24, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 24, 2021 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-24_11_44_55-2459813221828693126?project=apache-beam-testing
    Sep 24, 2021 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-24_11_44_55-2459813221828693126
    Sep 24, 2021 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-24_11_44_55-2459813221828693126
    Sep 24, 2021 6:44:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-24T18:44:58.962Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:06.108Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:06.973Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:07.017Z: Expanding GroupByKey operations into optimizable parts.
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:07.060Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:07.129Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:07.165Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:07.196Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:07.613Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:07.711Z: Starting 5 workers in us-central1-c...
    Sep 24, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:18.286Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 24, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:42.476Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 24, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:42.517Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 24, 2021 6:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:52.722Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 24, 2021 6:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:46:20.972Z: Workers have started successfully.
    Sep 24, 2021 6:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:46:21.013Z: Workers have started successfully.
    Sep 24, 2021 6:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:46:55.603Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 6:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:46:55.788Z: Cleaning up.
    Sep 24, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:46:55.867Z: Stopping worker pool...
    Sep 24, 2021 6:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:49:16.481Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 24, 2021 6:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:49:16.544Z: Worker pool stopped.
    Sep 24, 2021 6:49:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-24_11_44_55-2459813221828693126 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 28209f62-5c8d-4897-a31f-500057983f68 and timestamp: 2021-09-24T18:49:27.331000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.461

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 6:49:27 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.008 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 49.522 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 8s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/jomwvy3bl3cyi

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2464

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2464/display/redirect>

Changes:


------------------------------------------
[...truncated 337.24 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c75df4ad0b4326aabbad29be2039d846
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 24, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 24, 2021 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 24, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 24, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 24, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 24, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 24, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-iWELRk0J0eqj9_ooQn6KLmU0Q20eazTA8el4stWEfXY.jar
    Sep 24, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8714511773009991827.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6wFErkUUNZ4s4RVGNRiPabbe0XpmMP2R2HDyrdydQ2g.jar
    Sep 24, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 24, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 24, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash 4a59f11164f54caf0e7ddadfad8ed5a00395cc37863c94d8f4d06654dc0001fb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-SlnxEWT1TK8OfdrfrY7VoAOVzDeGPJTY9NBmVNwAAfs.pb
    Sep 24, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 24, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 24, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 24, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 24, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-24_05_45_07-6918030974945565860?project=apache-beam-testing
    Sep 24, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-24_05_45_07-6918030974945565860
    Sep 24, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-24_05_45_07-6918030974945565860
    Sep 24, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-24T12:45:13.604Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:18.112Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:18.924Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:18.959Z: Expanding GroupByKey operations into optimizable parts.
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:18.990Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:19.049Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:19.078Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:19.110Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:19.447Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:19.521Z: Starting 5 workers in us-central1-a...
    Sep 24, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:45.706Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 24, 2021 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:46:04.128Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 24, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:46:30.352Z: Workers have started successfully.
    Sep 24, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:46:30.389Z: Workers have started successfully.
    Sep 24, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:47:00.401Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:47:00.558Z: Cleaning up.
    Sep 24, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:47:00.625Z: Stopping worker pool...
    Sep 24, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:49:22.960Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 24, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:49:23.011Z: Worker pool stopped.
    Sep 24, 2021 12:49:29 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-24_05_45_07-6918030974945565860 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 84b2f5f2-d035-4207-ac57-446d4eb65e0b and timestamp: 2021-09-24T12:49:29.063000000Z:
                     Metric:                    Value:
                   read_time                     9.428
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 12:49:29 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
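
The warning above is why the read_time and fields_read values stay in the console only: the InfluxDB publisher was given no target to write to. Assuming the InfluxDB option names used elsewhere in Beam's performance tests (an assumption; these names do not appear in this log), the publisher could be pointed at a target by extending the -DbeamTestPipelineOptions array, for example:

    "--influxHost=http://localhost:8086",             (assumed host)
    "--influxDatabase=beam_test_metrics",             (supplies the missing "database")
    "--influxMeasurement=sql_bqio_read_java_batch"    (supplies the missing "measurement")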

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 39.572 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/dbm4efp37xfj2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2463

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2463/display/redirect>

Changes:


------------------------------------------
[...truncated 341.76 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c75df4ad0b4326aabbad29be2039d846
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'
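
For readability, the options in that long command line that shape the runs below, annotated:

    --runner=DataflowRunner                          each test submits a real Dataflow job
    --numWorkers=5 / --maxNumWorkers=5               requested worker pool size
    --autoscalingAlgorithm=NONE                      fixed pool; this is why each job below warns
                                                     that the requested max number of workers is ignored
    --metricsBigQueryDataset=beam_performance        destination dataset for published result metrics
    --metricsBigQueryTable=sql_bqio_read_java_batch  destination table for published result metrics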

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 24, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 24, 2021 6:45:22 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 24, 2021 6:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 24, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
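
The exception message itself names the repair: a PCollection of Beam Rows gets no inferred coder, so a schema has to be attached with PCollection.setRowSchema before the pipeline is finalized. A minimal sketch of that fix (hypothetical field names and types mirroring the four projected columns; not the test's actual code):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // Hypothetical schema covering only the query's projected columns;
      // the real HACKER_NEWS table has more fields.
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();
        // With a schema attached, Beam can derive a RowCoder, which is what
        // PCollection.finishSpecifying() fails to find in the trace above.
        return rows.setRowSchema(schema);
      }
    }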

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 24, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
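
The plan and filter above are what DIRECT_READ buys over DEFAULT: BeamPushDownIOSourceRel carries usedFields (only by, type, title, score are read) plus a supported BigQueryFilter, so projection and predicate are evaluated by the BigQuery Storage Read API rather than inside the pipeline. A hedged sketch of a Beam SQL table definition that selects this path (location and column list hypothetical; method values are DEFAULT, EXPORT, or DIRECT_READ):

    CREATE EXTERNAL TABLE HACKER_NEWS (
      title VARCHAR, `by` VARCHAR, score INTEGER, type VARCHAR  -- subset of the real schema
    )
    TYPE bigquery
    LOCATION 'bigquery-public-data:hacker_news.full'
    TBLPROPERTIES '{"method": "DIRECT_READ"}'
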
    Sep 24, 2021 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 24, 2021 6:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 24, 2021 6:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-iWELRk0J0eqj9_ooQn6KLmU0Q20eazTA8el4stWEfXY.jar
    Sep 24, 2021 6:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2515785665617398108.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-u1pRGU47V9-INDq0nZ2fj7xYX9wjxftvRxS29b0WMc8.jar
    Sep 24, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 24, 2021 6:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 24, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash bd7906fb3c8587d81c9d0a6badaf4b6f7f301a59a2735bb43cb034bb7cf1baf3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-vXkG-zyFh9gcnQprra9Lb38wGlmic1u0PLA0u3zxuvM.pb
    Sep 24, 2021 6:45:37 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 24, 2021 6:45:37 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 24, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 24, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 24, 2021 6:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-23_23_45_38-4999594876451740195?project=apache-beam-testing
    Sep 24, 2021 6:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-23_23_45_38-4999594876451740195
    Sep 24, 2021 6:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-23_23_45_38-4999594876451740195
    Sep 24, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-24T06:45:41.792Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 24, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:50.479Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 24, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:51.390Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 24, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:51.434Z: Expanding GroupByKey operations into optimizable parts.
    Sep 24, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:51.472Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 24, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:51.588Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 24, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:51.624Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 24, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:51.663Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 24, 2021 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:52.082Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:52.187Z: Starting 5 workers in us-central1-c...
    Sep 24, 2021 6:45:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:56.065Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 24, 2021 6:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:46:34.168Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 24, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:46:59.714Z: Workers have started successfully.
    Sep 24, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:46:59.740Z: Workers have started successfully.
    Sep 24, 2021 6:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:47:30.615Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 6:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:47:30.823Z: Cleaning up.
    Sep 24, 2021 6:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:47:30.918Z: Stopping worker pool...
    Sep 24, 2021 6:49:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:49:53.558Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 24, 2021 6:49:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:49:53.611Z: Worker pool stopped.
    Sep 24, 2021 6:49:59 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-23_23_45_38-4999594876451740195 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9c9b8c78-6057-483d-bc43-3e94b030d40a and timestamp: 2021-09-24T06:49:59.724000000Z:
                     Metric:                    Value:
                   read_time                     9.254
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 6:50:00 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 42.017 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 40s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/wbq5hydbz3fqi

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2462

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2462/display/redirect?page=changes>

Changes:

[Brian Hulette] Add back 'more' break

[chuck.yang] Run BigQuery queries with batch priority

[chuck.yang] Allow changing query priority in ReadFromBigQuery

[chuck.yang] Update CHANGES

[alexander.chermenin] [BEAM-10822] Fixed typo in BigqueryClient

[zyichi] [BEAM-12898] Minor fix to Kubernetes.groovy postBuildScript.

[noreply] [BEAM-12869] Bump tensorflow from 2.5.0 to 2.5.1 in

[noreply] [BEAM-12946] Fix SmallestPerKey, add unit testing (#15567)

[zyichi] [BEAM-12908] Sickbay pubsublite.ReadWriteIT

[noreply] Relocate Go SDK breaking change note out of template into 2.33.0.


------------------------------------------
[...truncated 340.76 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker Thread 31,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker Thread 31,5,main]) started.
Gradle Test Executor 57 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c75df4ad0b4326aabbad29be2039d846
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 57'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 57'
Successfully started process 'Gradle Test Executor 57'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 24, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 24, 2021 12:45:19 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 24, 2021 12:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 24, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 24, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 24, 2021 12:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 24, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 24, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-iWELRk0J0eqj9_ooQn6KLmU0Q20eazTA8el4stWEfXY.jar
    Sep 24, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3067383912730159324.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-RDIeCsZwHBmQy94ZoMBrlbsiVS69QscI-QVNIZ6ntMA.jar
    Sep 24, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 24, 2021 12:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 24, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 18c0661a19c3a6b5bfe0fa3d91a384419be310a1070c2e0344467c82697f0056> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-GMBmGhnDprW_4Po9kaOEQZvjEKEHDC4DREZ8gml_AFY.pb
    Sep 24, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 24, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 24, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 24, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 24, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-23_17_45_32-14184398797648112020?project=apache-beam-testing
    Sep 24, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-23_17_45_32-14184398797648112020
    Sep 24, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-23_17_45_32-14184398797648112020
    Sep 24, 2021 12:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-24T00:45:38.208Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:43.366Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.025Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.075Z: Expanding GroupByKey operations into optimizable parts.
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.115Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.187Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.214Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.247Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.588Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.641Z: Starting 5 workers in us-central1-a...
    Sep 24, 2021 12:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:46:05.709Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 24, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:46:29.209Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 24, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:46:55.420Z: Workers have started successfully.
    Sep 24, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:46:55.456Z: Workers have started successfully.
    Sep 24, 2021 12:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:47:24.505Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 12:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:47:24.647Z: Cleaning up.
    Sep 24, 2021 12:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:47:24.721Z: Stopping worker pool...
    Sep 24, 2021 12:49:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:49:43.602Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 24, 2021 12:49:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:49:43.645Z: Worker pool stopped.
    Sep 24, 2021 12:49:50 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-23_17_45_32-14184398797648112020 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c9e709a3-21e4-4373-be6a-7d4e5ad078bd and timestamp: 2021-09-24T00:49:50.754000000Z:
                     Metric:                    Value:
                   read_time                       9.0
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 12:49:51 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 57 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.006 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.004 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker Thread 31,5,main]) completed. Took 4 mins 36.6 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 100 executed, 52 from cache

Publishing build scan...
https://gradle.com/s/qcqjg3bvildw2

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2461

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2461/display/redirect?page=changes>

Changes:

[noreply] Avoid setting empty builders in proto setters

[Robert Bradshaw] Fix some website logos.

[kawaigin] Implicitly watch and track anonymous pipeline and PCollections

[chamikaramj] Reject requests when parameter names cannot be validated unless

[chamikaramj] Updates error message


------------------------------------------
[...truncated 352.53 KB...]
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 23, 2021 6:47:08 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 23, 2021 6:47:09 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 23, 2021 6:47:10 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 23, 2021 6:47:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:47:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:47:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 6:47:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:47:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:47:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 6:47:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
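
The failure above is Beam's standard coder-inference error: a ParDo that emits Row elements cannot infer a coder, so a schema must be attached to the output PCollection. A minimal sketch of the remedy the message itself names (PCollection.setRowSchema), assuming a hypothetical schema for the four projected columns; this is illustrative, not the IT's actual code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Hypothetical schema matching the columns projected by the query above.
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        PCollection<Row> rows = p.apply(
            Create.of(Row.withSchema(schema).addValues("a", "story", "t", 3L).build())
                .withRowSchema(schema));

        // A DoFn emitting Row cannot infer a coder on its own; setRowSchema
        // attaches the schema so a RowCoder can be used -- exactly what the
        // IllegalStateException above recommends.
        PCollection<Row> monitored = rows
            .apply(ParDo.of(new DoFn<Row, Row>() {
              @ProcessElement
              public void processElement(@Element Row row, OutputReceiver<Row> out) {
                out.output(row);
              }
            }))
            .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }

Why the IT suddenly trips this would need the change history; the sketch only shows the fix the message names.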

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 23, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 23, 2021 6:47:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 23, 2021 6:47:23 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 23, 2021 6:47:23 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-iWELRk0J0eqj9_ooQn6KLmU0Q20eazTA8el4stWEfXY.jar
    Sep 23, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-iWELRk0J0eqj9_ooQn6KLmU0Q20eazTA8el4stWEfXY.jar
    Sep 23, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2936684944477571990.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-CIXDPNEZW4V5N8CqNLzcgGSVUVSm7_dZ4ZyZ2qG_UU0.jar
    Sep 23, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/core-construction-java/build/libs/beam-runners-core-construction-java-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-core-construction-java-2.34.0-SNAPSHOT-vZyLxaaCl52ich-R-UQheFzpjD42oKouyyEoIx03QC4.jar
    Sep 23, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/direct-java/build/libs/beam-runners-direct-java-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-direct-java-2.34.0-SNAPSHOT-rHNNJt8fKjoZ7FjgFCzASOqHR3-JCxYQBb_7vqA7ZtE.jar
    Sep 23, 2021 6:47:26 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 246 files cached, 4 files newly uploaded in 2 seconds
    Sep 23, 2021 6:47:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 23, 2021 6:47:26 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 027ad43ceaa0e50b63f22db5cc5fa88a58b4726aebbc49592b8a841efc2d83ed> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-AnrUPOqg5Qtj8i21zF-oili0cmrrvElZK4qEHvwtg-0.pb
    Sep 23, 2021 6:47:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 23, 2021 6:47:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 23, 2021 6:47:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 23, 2021 6:47:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 23, 2021 6:47:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-23_11_47_29-5175217799069690722?project=apache-beam-testing
    Sep 23, 2021 6:47:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-23_11_47_29-5175217799069690722
    Sep 23, 2021 6:47:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-23_11_47_29-5175217799069690722
    Sep 23, 2021 6:47:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-23T18:47:32.940Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:39.547Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.285Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.326Z: Expanding GroupByKey operations into optimizable parts.
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.376Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.458Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.488Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.513Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.855Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.933Z: Starting 5 workers in us-central1-a...
    Sep 23, 2021 6:47:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:57.740Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 23, 2021 6:48:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:48:31.167Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Sep 23, 2021 6:48:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:48:31.192Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Sep 23, 2021 6:48:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:48:41.526Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 23, 2021 6:49:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:49:03.176Z: Workers have started successfully.
    Sep 23, 2021 6:49:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:49:03.228Z: Workers have started successfully.
    Sep 23, 2021 6:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:49:31.385Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 6:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:49:31.526Z: Cleaning up.
    Sep 23, 2021 6:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:49:31.621Z: Stopping worker pool...
    Sep 23, 2021 6:51:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:51:46.747Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 23, 2021 6:51:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:51:46.792Z: Worker pool stopped.
    Sep 23, 2021 6:51:51 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-23_11_47_29-5175217799069690722 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4d760c17-9dbb-4873-8634-904f58e0e733 and timestamp: 2021-09-23T18:51:52.056000000Z:
                     Metric:                    Value:
                   read_time                     8.396
                 fields_read                 4375276.0
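
Only readUsingDirectReadMethodPushDown reaches Dataflow and completes: its BEAMPlan above replaces the BeamCalcRel + BeamIOSourceRel pair with a single BeamPushDownIOSourceRel, so the projection (by, type, title, score) and the filter are evaluated by the BigQuery Storage Read API rather than inside the pipeline, which is what the read_time and fields_read figures measure. A minimal sketch of the same query shape through SqlTransform, assuming an in-memory table; against a plain PCOLLECTION nothing is pushed down, so this only illustrates the query API, not the push-down itself:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQueryShapeSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Hypothetical schema mirroring the fields the IT's query touches.
        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        PCollection<Row> items = p.apply(
            Create.of(Row.withSchema(schema).addValues("a", "story", "t", 3L).build())
                .withRowSchema(schema));

        // Same query shape as in the log. With a BigQuery table provider (as
        // in the IT) the planner emits BeamPushDownIOSourceRel and forwards
        // this filter and projection to the storage read session.
        PCollection<Row> filtered = items.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }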

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 6:51:53 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
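
The run's metrics are computed but then dropped because the InfluxDB measurement/database settings were never supplied. A minimal sketch of a complete configuration, assuming the org.apache.beam.sdk.testutils.publishing builder API matches this Beam version; the host, database, and measurement values are placeholders:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // Placeholder values; the perf-test harness normally derives these
        // from pipeline options rather than hard-coding them.
        InfluxDBSettings settings = InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_performance")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
        System.out.println("InfluxDB settings configured; metrics would now publish.");
      }
    }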

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.048 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.053 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 52.801 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 19s
152 actionable tasks: 103 executed, 49 from cache

Publishing build scan...
https://gradle.com/s/2edlqbjwfog4y

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2460

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2460/display/redirect>

Changes:


------------------------------------------
[...truncated 341.47 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 23, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 23, 2021 12:45:02 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 23, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 23, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 23, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 23, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 23, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 23, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 23, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3945536414481175957.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-upxuCbr27peujoVKE7chutrtsZa5e-BThvm_dK1TBRU.jar
    Sep 23, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.34.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.34.0-SNAPSHOT-tests-I-osQrG-lSEES6RD4ibuovIeVl-_U3nOEEsBLo1rYrA.jar
    Sep 23, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.34.0-SNAPSHOT-4bF8QpaxkWlkdPssX5OV0S9rIzFuRU0pojrN2fBwjjk.jar
    Sep 23, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/join-library/build/libs/beam-sdks-java-extensions-join-library-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-join-library-2.34.0-SNAPSHOT-xz-aSlhizJi8jigGo3amZ2g_MkglE3dBulicsi2UX5U.jar
    Sep 23, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/udf/build/libs/beam-sdks-java-extensions-sql-udf-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-udf-2.34.0-SNAPSHOT-MlQQAS1Je1cH4lXtIOKM3zL6o97S5F7sJQ3zUailw08.jar
    Sep 23, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 1 seconds
    Sep 23, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 23, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104000 bytes, hash af9781628e59b27097793bc43892cf6bf3bc88186592c229304c113d5f985145> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-r5eBYo5ZsnCXeTvEOJLPa_O8iBhlksIpMEwRPV-YUUU.pb
    Sep 23, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 23, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 23, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 23, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 23, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-23_05_45_17-9181264945269481986?project=apache-beam-testing
    Sep 23, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-23_05_45_17-9181264945269481986
    Sep 23, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-23_05_45_17-9181264945269481986
    Sep 23, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-23T12:45:24.072Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 23, 2021 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:37.032Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:37.680Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:37.723Z: Expanding GroupByKey operations into optimizable parts.
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:37.753Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:37.830Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:37.866Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:37.897Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:38.253Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:38.342Z: Starting 5 workers in us-central1-c...
    Sep 23, 2021 12:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:46:03.313Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 23, 2021 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:46:12.594Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 23, 2021 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:46:12.624Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 23, 2021 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:46:22.906Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 23, 2021 12:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:46:48.198Z: Workers have started successfully.
    Sep 23, 2021 12:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:46:48.235Z: Workers have started successfully.
    Sep 23, 2021 12:47:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:47:19.094Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 12:47:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:47:19.247Z: Cleaning up.
    Sep 23, 2021 12:47:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:47:19.335Z: Stopping worker pool...
    Sep 23, 2021 12:49:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:49:50.498Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 23, 2021 12:49:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:49:50.548Z: Worker pool stopped.
    Sep 23, 2021 12:49:58 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-23_05_45_17-9181264945269481986 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6df0a03d-f87a-439f-a8e9-2a9cc96fc940 and timestamp: 2021-09-23T12:49:58.438000000Z:
                     Metric:                    Value:
                   read_time                     8.838
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 12:49:58 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.067 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 1.666 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 34s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/b6ar7voroa7cm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2459

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2459/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12383] Adding Go SDK and Kafka IO to Gradle cross-language test


------------------------------------------
[...truncated 340.97 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ef521024ff2c41cc9caf5e1ede2e99a5
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'
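
The -DbeamTestPipelineOptions system property in the command above is the channel through which the JSON array of pipeline options reaches the test JVM; TestPipeline parses it when a test asks for its options. A minimal sketch of reading them, illustrative rather than the IT's actual code:

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class TestOptionsSketch {
      public static void main(String[] args) {
        // Parses the JSON array supplied via -DbeamTestPipelineOptions, e.g.
        // --runner=DataflowRunner and --numWorkers=5 as in the command above.
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        System.out.println("Runner: " + options.getRunner().getSimpleName());
        System.out.println("Temp location: " + options.getTempLocation());
      }
    }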

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 23, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 23, 2021 6:45:20 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 23, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 23, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
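
The plan above is the push-down success case: usedFields lists only the four projected columns and BigQueryFilter carries the whole predicate as supported, so with DIRECT_READ the filter and projection are evaluated inside the BigQuery Storage Read API and no BeamCalcRel survives in the plan. A hedged sketch of issuing the same query through Beam SQL; the `pipeline` variable and the prior table registration are assumptions, not the IT's actual setup:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // `pipeline` is an existing org.apache.beam.sdk.Pipeline (assumed).
    // Assumes HACKER_NEWS was registered beforehand with the BigQuery table
    // provider and method DIRECT_READ (e.g. via CREATE EXTERNAL TABLE ...
    // TYPE 'bigquery' ... TBLPROPERTIES '{"method": "DIRECT_READ"}').
    PCollection<Row> result =
        pipeline.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM HACKER_NEWS "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
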
    Sep 23, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 23, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 23, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 23, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5910955340003637270.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-g4Rx_5rhi-UXfSpycV8e91lWcnG12qQ71zBea2Ze0mI.jar
    Sep 23, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mongodb/mongo-java-driver/3.12.7/1f45c6a397feeb46da75425619333d1cc6f90f78/mongo-java-driver-3.12.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mongo-java-driver-3.12.7-D_zgBJWNb9mzmuetJ37a0X9XtpcfSGsXYpxe6eE8Tao.jar
    Sep 23, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Sep 23, 2021 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 23, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash b669c72af7daf9ece45c5f55b2c8232ff9275730054fa0808ad3cd9b2e0c2d90> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-tmnHKvfa-ezkXF9VssgjL_knVzAFT6CAitPNmy4MLZA.pb
    Sep 23, 2021 6:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 23, 2021 6:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 23, 2021 6:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 23, 2021 6:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 23, 2021 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-22_23_45_34-17970570427316720099?project=apache-beam-testing
    Sep 23, 2021 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-22_23_45_34-17970570427316720099
    Sep 23, 2021 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-22_23_45_34-17970570427316720099
    Sep 23, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-23T06:45:38.771Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 23, 2021 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:45.510Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 23, 2021 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.202Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 23, 2021 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.231Z: Expanding GroupByKey operations into optimizable parts.
    Sep 23, 2021 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.261Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 23, 2021 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.326Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 23, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.357Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 23, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.390Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 23, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.751Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.819Z: Starting 5 workers in us-central1-a...
    Sep 23, 2021 6:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:53.710Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 23, 2021 6:46:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:46:32.942Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 23, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:46:58.443Z: Workers have started successfully.
    Sep 23, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:46:58.480Z: Workers have started successfully.
    Sep 23, 2021 6:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:47:27.001Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 6:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:47:27.125Z: Cleaning up.
    Sep 23, 2021 6:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:47:27.190Z: Stopping worker pool...
    Sep 23, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:49:46.637Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 23, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:49:46.707Z: Worker pool stopped.
    Sep 23, 2021 6:49:52 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-22_23_45_34-17970570427316720099 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 37e52fc6-fcec-47eb-95ae-a6d4e0659e53 and timestamp: 2021-09-23T06:49:52.471000000Z:
                     Metric:                    Value:
                   read_time                     9.162
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 6:49:52 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

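The InfluxDB warning above does not affect the pass/fail outcome: InfluxDBPublisher simply skips publishing when no measurement/database is configured. A hedged sketch of supplying those settings, assuming the InfluxDBSettings builder from Beam's test utilities; the host, database, and measurement values are placeholders:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    // Placeholder values; in the Jenkins jobs these normally arrive as
    // pipeline options rather than hard-coded strings.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_test_metrics")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
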
Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 37.547 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 33s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/ewao26yabe3tm

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2458

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2458/display/redirect?page=changes>

Changes:

[kileysok] Add MapState and SetState support

[Kyle Weaver] [BEAM-12898] Use new postbuildscript dsl in Flink load tests

[Kyle Weaver] run postbuild regardless of test result

[Kyle Weaver] spotless

[noreply] Merge pull request #15487 from [BEAM-12812] - Run Github Actions on GCP

[noreply] [BEAM-11097] Add SideInputCache to harness control type (#15530)

[kawaigin] Updated interactive integration test golden screenshots.

[noreply] Revert PR 15487 (BEAM-12812) (#15554)

[zyichi] [BEAM-12898] Trying out solution suggestion for JENKINS-66189 to solve


------------------------------------------
[...truncated 340.82 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 52ebef38e18ea6b3c79e97ca10aec7b1
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar>","--region=us-central1"] -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 23, 2021 12:46:56 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
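
A hedged example of the preferred option named in this warning (the image value is a placeholder; the job's own invocation above still passes the deprecated --workerHarnessContainerImage= flag, hence the message):

    --sdkContainerImage=gcr.io/example-project/beam-java8-sdk:latest
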
    Sep 23, 2021 12:46:57 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 23, 2021 12:46:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 23, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:47:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 12:47:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:47:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:47:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:47:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 12:47:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:47:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:47:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 12:47:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 23, 2021 12:47:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 23, 2021 12:47:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 23, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 23, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 23, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5184320067722493232.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-az2qEX0NxQwDhCTXI2k7ebhcmbbYnVpTPUkWAcHylm4.jar
    Sep 23, 2021 12:47:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigtable-emulator/0.137.1/b1639aa134de1302e43d9e9c3843f6ff853c510f/google-cloud-bigtable-emulator-0.137.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigtable-emulator-0.137.1-V2SlLU4BjIGf5QoF7YL-A-QNjw49NdXrNM4QoX9MXSI.jar
    Sep 23, 2021 12:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 2 seconds
    Sep 23, 2021 12:47:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 23, 2021 12:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103999 bytes, hash 11b94f5423a572a9f59f1157c15775ecabb636195aa71bd2dc2e72adbf45c43b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-EblPVCOlcqn1nxFXwVd17Ku2NhlapxvS3C5yrb9FxDs.pb
    Sep 23, 2021 12:47:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 23, 2021 12:47:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 23, 2021 12:47:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 23, 2021 12:47:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 23, 2021 12:47:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-22_17_47_11-3244588918368151576?project=apache-beam-testing
    Sep 23, 2021 12:47:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-22_17_47_11-3244588918368151576
    Sep 23, 2021 12:47:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-22_17_47_11-3244588918368151576
    Sep 23, 2021 12:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-23T00:47:15.522Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:23.824Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:24.603Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:24.642Z: Expanding GroupByKey operations into optimizable parts.
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:24.672Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:24.767Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:24.806Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:24.845Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:25.323Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:25.411Z: Starting 5 workers in us-central1-c...
    Sep 23, 2021 12:47:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:48.955Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 23, 2021 12:48:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:48:01.559Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 23, 2021 12:48:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:48:01.614Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 23, 2021 12:48:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:48:11.900Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 23, 2021 12:48:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:48:38.117Z: Workers have started successfully.
    Sep 23, 2021 12:48:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:48:38.151Z: Workers have started successfully.
    Sep 23, 2021 12:49:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:49:09.689Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 12:49:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:49:09.869Z: Cleaning up.
    Sep 23, 2021 12:49:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:49:09.963Z: Stopping worker pool...
    Sep 23, 2021 12:51:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:51:37.848Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 23, 2021 12:51:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:51:37.925Z: Worker pool stopped.
    Sep 23, 2021 12:51:45 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-22_17_47_11-3244588918368151576 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0adb44ea-5b14-4349-8bae-16d579796dd5 and timestamp: 2021-09-23T00:51:45.159000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.401

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 12:51:45 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 53.002 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 50s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/gidlazahzrocc

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2457

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2457/display/redirect>

Changes:


------------------------------------------
[...truncated 338.67 KB...]
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 184d1d1e4a85b12c1e9374eee179eec0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar>","--region=us-central1"] -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 22, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 22, 2021 6:44:58 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 22, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 22, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 22, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 22, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 22, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 22, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1244217088328882518.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-gaHEQFRtP_RhA-CnXy22XXTvOIgKB8QuwHJmhQY5QcM.jar
    Sep 22, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 22, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 22, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash b914cb50a0d09d4ec672e60bd0efed8060794a88f55dcb567218c7faff2d154b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-uRTLUKDQnU7GcuYL0O_tgGB5Soj1XctWchjH-v8tFUs.pb
    Sep 22, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 22, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 22, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 22, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 22, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-22_11_45_12-13103536790849586117?project=apache-beam-testing
    Sep 22, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-22_11_45_12-13103536790849586117
    Sep 22, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-22_11_45_12-13103536790849586117
    Sep 22, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-22T18:45:15.394Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 22, 2021 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:23.116Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 22, 2021 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:23.700Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 22, 2021 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:23.744Z: Expanding GroupByKey operations into optimizable parts.
    Sep 22, 2021 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:23.782Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 22, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:23.874Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 22, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:23.914Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 22, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:23.948Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 22, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:24.342Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:24.427Z: Starting 5 workers in us-central1-c...
    Sep 22, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:36.599Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 22, 2021 6:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:59.901Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Sep 22, 2021 6:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:59.954Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Sep 22, 2021 6:46:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:46:10.241Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 22, 2021 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:46:35.293Z: Workers have started successfully.
    Sep 22, 2021 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:46:35.318Z: Workers have started successfully.
    Sep 22, 2021 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:47:07.293Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:47:07.461Z: Cleaning up.
    Sep 22, 2021 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:47:07.542Z: Stopping worker pool...
    Sep 22, 2021 6:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:49:30.694Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 22, 2021 6:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:49:30.743Z: Worker pool stopped.
    Sep 22, 2021 6:49:37 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-22_11_45_12-13103536790849586117 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2a3475f2-b7be-478c-ab0d-6c180f4e11af and timestamp: 2021-09-22T18:49:37.336000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.199

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 6:49:37 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
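
This warning means the run's metrics (the fields_read and read_time values above) were computed but never written to InfluxDB, because the job did not supply the publisher's database and measurement settings. In Beam's test infrastructure these normally arrive as extra pipeline options next to the ones visible in the Gradle command lines later in this thread; the flag names and values below are assumptions, not taken from this job's configuration:

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=<host URL>"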

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.348 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 45.159 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 18s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/a4t4wz5trhy74

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2456

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2456/display/redirect?page=changes>

Changes:

[danthev] Fix 2.32.0 release notes.


------------------------------------------
[...truncated 337.21 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 184d1d1e4a85b12c1e9374eee179eec0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 22, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 22, 2021 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 22, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 22, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
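
The SQLPlan/BEAMPlan pair above shows Calcite's logical plan and the Beam plan it is lowered to before the coder failure below. For orientation, here is a minimal, self-contained sketch of running the same query shape through Beam SQL's SqlTransform. This is not the IT's code (the test reads the BigQuery-backed beam.HACKER_NEWS table through a table provider, as the logs show); the in-memory input row and the schema's field types are assumptions:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class HackerNewsQuerySketch {
      public static void main(String[] args) {
        // Schema mirroring the four columns the query touches (types assumed).
        Schema schema =
            Schema.builder()
                .addStringField("by")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // A single in-memory row stands in for the BigQuery table.
        PCollection<Row> input =
            p.apply(
                Create.of(
                        Row.withSchema(schema)
                            .addValues("some_user", "story", "A title", 3L)
                            .build())
                    .withRowSchema(schema));

        // Same SELECT/WHERE shape as the query in the log, over PCOLLECTION.
        PCollection<Row> result =
            input.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }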


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
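
The exception above says the planner handed back a PCollection<Row> that never received a schema, so no Row coder could be inferred. A minimal sketch of the fix the message itself suggests, assuming a Row PCollection and the four projected columns; field types and nullability are guesses, not taken from the test's sources:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // Attach an explicit schema so Beam can infer a coder for the Row
      // collection, per the exception's setRowSchema suggestion.
      static PCollection<Row> withSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();
        // Equivalent alternative:
        // rows.setCoder(org.apache.beam.sdk.coders.RowCoder.of(schema));
        return rows.setRowSchema(schema);
      }
    }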

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 22, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 22, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 22, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 22, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1080326950153223984.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-E0JAGKSgqH47A6_9SBeNMJe8OWoJ0gbHg_d6TMB2vVw.jar
    Sep 22, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 22, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 22, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 73213c3a9e936855b3bb3a48c7e5adb4a07f9b322108c820f91b481ec04af87f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-cyE8Op6TaFWzuzpIx-WttKB_mzIhCMgg-RtIHsBK-H8.pb
    Sep 22, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 22, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 22, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-22_05_45_09-15371512024525983311?project=apache-beam-testing
    Sep 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-22_05_45_09-15371512024525983311
    Sep 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-22_05_45_09-15371512024525983311
    Sep 22, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-22T12:45:12.484Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:20.382Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.185Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.234Z: Expanding GroupByKey operations into optimizable parts.
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.269Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.352Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.391Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.422Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.880Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.957Z: Starting 5 workers in us-central1-c...
    Sep 22, 2021 12:45:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:50.153Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 22, 2021 12:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:46:11.003Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 22, 2021 12:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:46:36.335Z: Workers have started successfully.
    Sep 22, 2021 12:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:46:36.366Z: Workers have started successfully.
    Sep 22, 2021 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:47:07.601Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:47:07.759Z: Cleaning up.
    Sep 22, 2021 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:47:07.848Z: Stopping worker pool...
    Sep 22, 2021 12:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:49:29.844Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 22, 2021 12:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:49:29.906Z: Worker pool stopped.
    Sep 22, 2021 12:49:35 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-22_05_45_09-15371512024525983311 finished with status DONE.
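
For reference, the push-down this successful run recorded (usedFields=[by, type, title, score] plus the supported BigQueryFilter) is what a BigQuery Storage API read expresses as selected fields and a row restriction. A hedged sketch of the equivalent standalone direct read follows; the table spec is an assumption, and the IT reaches this through the SQL layer rather than by calling BigQueryIO directly:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            "Read Input BQ Rows with push-down",
            BigQueryIO.readTableRows()
                .from("my-project:my_dataset.HACKER_NEWS") // assumed table spec
                .withMethod(TypedRead.Method.DIRECT_READ)
                // usedFields from the plan become the storage read's column list...
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // ...and the supported filter becomes a server-side row restriction.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }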

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 36c0a812-054a-401b-bf6a-0b82689eea44 and timestamp: 2021-09-22T12:49:35.225000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.325

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 12:49:35 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 45.649 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/uaklsmfn5dgek

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2455

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2455/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #15537 from [BEAM-12908] Add a sleep to the IT after


------------------------------------------
[...truncated 337.68 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 184d1d1e4a85b12c1e9374eee179eec0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 22, 2021 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 22, 2021 6:44:54 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 22, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 22, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 22, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 22, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 22, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 22, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 22, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test171101041235627558.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-NU1LvmrDW5YH4cBRSQX_zfD2xwsH9tyX1bR9_ahgIkI.jar
    Sep 22, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 22, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 22, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash e6eb149ebfe9b6e0adf5512916244aa35a3f7f59620b42cbcc6dd0a5874c9e02> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-5usUnr_ptuCt9VEpFiRKo1o_f1liC0LLzG3QpYdMngI.pb
    Sep 22, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 22, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 22, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 22, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 22, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-21_23_45_08-831295087074972985?project=apache-beam-testing
    Sep 22, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-21_23_45_08-831295087074972985
    Sep 22, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-21_23_45_08-831295087074972985
    Sep 22, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-22T06:45:11.641Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 22, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:17.337Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 22, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.129Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 22, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.166Z: Expanding GroupByKey operations into optimizable parts.
    Sep 22, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.191Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 22, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.257Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 22, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.294Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 22, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.323Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 22, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.730Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.808Z: Starting 5 workers in us-central1-a...
    Sep 22, 2021 6:45:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:49.475Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 22, 2021 6:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:46:03.070Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 22, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:46:29.367Z: Workers have started successfully.
    Sep 22, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:46:29.427Z: Workers have started successfully.
    Sep 22, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:46:58.337Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:46:58.483Z: Cleaning up.
    Sep 22, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:46:58.566Z: Stopping worker pool...
    Sep 22, 2021 6:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:49:23.097Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 22, 2021 6:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:49:23.169Z: Worker pool stopped.
    Sep 22, 2021 6:49:30 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-21_23_45_08-831295087074972985 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7ee577c3-a708-4b2f-8b17-aca66c653bcc and timestamp: 2021-09-22T06:49:30.654000000Z:
                     Metric:                    Value:
                   read_time                     8.108
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 6:49:31 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 41.272 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/iwf5unyl6uklk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2454

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2454/display/redirect?page=changes>

Changes:

[kawaigin] [BEAM-10708] Added an example notebook for beam_sql magic

[noreply] Add a timeout for BQ streaming_insert RPCS (#15541)


------------------------------------------
[...truncated 337.32 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is af5aef652cb0ffd2e4d4d1a71b16ad28
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 22, 2021 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 22, 2021 12:44:59 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 22, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 22, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
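
The exception itself names the fix: a PCollection of Beam Rows needs a schema attached before a RowCoder can be inferred. A minimal, self-contained sketch of that remedy, using the four projected columns from the query above (names and values are illustrative; this is not the IT's actual code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        // Schema mirroring the projection: author, type, title, score.
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        Pipeline p = Pipeline.create();
        PCollection<Row> rows = p
            .apply(Create.of(
                Row.withSchema(schema).addValues("someone", "story", "Example", 3L).build()))
            // Without this call, coder inference fails exactly as in the log:
            // "Cannot provide a coder for a Beam Row."
            .setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }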

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
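
The plan and filter lines above are the substance of the push-down test: only the four used fields are requested, and the predicate travels to BigQuery instead of running in a Beam ParDo. Roughly speaking, BigQueryTable.buildIOReader ends up asking the BigQuery Storage Read API for a projection plus a row restriction; below is a sketch of the equivalent request options using the google-cloud-bigquerystorage client (illustrative only, not Beam's internal code; the DIRECT_READ method itself is chosen via the table's properties):

    import com.google.cloud.bigquery.storage.v1.ReadSession;

    public class PushDownSketch {
      public static void main(String[] args) {
        // Server-side projection and filter, matching the pushed-down plan above.
        ReadSession.TableReadOptions readOptions =
            ReadSession.TableReadOptions.newBuilder()
                .addSelectedFields("by")
                .addSelectedFields("type")
                .addSelectedFields("title")
                .addSelectedFields("score")
                .setRowRestriction("(`type` = 'story' OR `type` = 'job') AND `score` > 2")
                .build();
        System.out.println(readOptions);
      }
    }
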
    Sep 22, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 22, 2021 12:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 22, 2021 12:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 22, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4274549280785273227.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-d9SpcQgcM0lltFLrjsbMZhYEmwb_zY2Ng-lF5yftMSc.jar
    Sep 22, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 22, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 22, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 688b941c99458e9229a06f828b9731d51d518db03490136639a5a3539100c897> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-aIuUHJlFjpIpoG-Ci5cx1R1RjbA0kBNmOaWjU5EAyJc.pb
    Sep 22, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 22, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 22, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 22, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 22, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-21_17_45_12-18409918559903482452?project=apache-beam-testing
    Sep 22, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-21_17_45_12-18409918559903482452
    Sep 22, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-21_17_45_12-18409918559903482452
    Sep 22, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-22T00:45:15.552Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 22, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:21.291Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.034Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.067Z: Expanding GroupByKey operations into optimizable parts.
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.126Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.201Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.230Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.260Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.650Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.722Z: Starting 5 workers in us-central1-a...
    Sep 22, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:32.085Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 22, 2021 12:46:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:46:01.335Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:46:34.996Z: Workers have started successfully.
    Sep 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:46:35.020Z: Workers have started successfully.
    Sep 22, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:47:04.530Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:47:04.660Z: Cleaning up.
    Sep 22, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:47:04.742Z: Stopping worker pool...
    Sep 22, 2021 12:49:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:49:28.088Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 22, 2021 12:49:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:49:28.131Z: Worker pool stopped.
    Sep 22, 2021 12:49:35 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-21_17_45_12-18409918559903482452 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): daeb5ee0-018d-43be-97f3-60ac719aa1a7 and timestamp: 2021-09-22T00:49:35.590000000Z:
                     Metric:                    Value:
                   read_time                     8.915
                 fields_read                 4375276.0
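
The read_time and fields_read values are produced by the Monitor ParDos visible in the step names above and surface as ordinary Beam metrics on the finished job. A sketch of querying such a metric from a PipelineResult ("perf_namespace" is a placeholder, not the IT's real namespace):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.PipelineResult;
    import org.apache.beam.sdk.metrics.MetricNameFilter;
    import org.apache.beam.sdk.metrics.MetricQueryResults;
    import org.apache.beam.sdk.metrics.MetricsFilter;

    public class MetricsSketch {
      public static void main(String[] args) {
        PipelineResult result = Pipeline.create().run();
        // Query a named metric once the job reaches a terminal state (DONE above).
        result.waitUntilFinish();
        MetricQueryResults metrics = result.metrics().queryMetrics(
            MetricsFilter.builder()
                .addNameFilter(MetricNameFilter.named("perf_namespace", "read_time"))
                .build());
        metrics.getDistributions().forEach(d ->
            System.out.println(d.getName() + " = " + d.getCommitted()));
      }
    }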

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 12:49:36 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
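
The publisher skips InfluxDB here because no measurement or database was configured for this run. Assuming the influx* option names used elsewhere in Beam's load-test framework, the fix would be to extend the -DbeamTestPipelineOptions array shown at executor start-up with entries like the following (values are illustrative placeholders):

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"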

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 41.029 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/4grmsoriyf5so

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2453

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2453/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12919] Removed IBM Streams from runner matrix (#15542)

[noreply] [BEAM-12258] Re-throw exception from forked thread in


------------------------------------------
[...truncated 340.65 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is af5aef652cb0ffd2e4d4d1a71b16ad28
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'
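
The JSON array in -DbeamTestPipelineOptions above is how the executor hands options to the test; below is a sketch of the usual way a Beam integration test reads them back (standard TestPipeline plumbing, not IT-specific code):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class OptionsSketch {
      public static void main(String[] args) {
        // Parses the beamTestPipelineOptions system property into PipelineOptions.
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        System.out.println(options);
      }
    }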

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 21, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 21, 2021 6:45:08 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 21, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 21, 2021 6:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 6:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 6:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 21, 2021 6:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 21, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 21, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 21, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 21, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-tests-5kU7Hyrl28H7hvdoTVd39uKp33WIlhc8ZD64bUU4wzA.jar
    Sep 21, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8522678397766649911.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-nzjuqmCa_lTUZ040_QVeqJtii2ZKtrI20iTcL37msQo.jar
    Sep 21, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Sep 21, 2021 6:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 21, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash abdf42e26da4175cb906cbce369ef9635e6de55d2821827503a4577c62fe3847> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-q99C4m2kF1y5BsvONp75Y15t5V0oIYJ1A6RXfGL-OEc.pb
    Sep 21, 2021 6:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 21, 2021 6:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 21, 2021 6:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 21, 2021 6:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 21, 2021 6:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-21_11_45_22-705998211072244175?project=apache-beam-testing
    Sep 21, 2021 6:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-21_11_45_22-705998211072244175
    Sep 21, 2021 6:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-21_11_45_22-705998211072244175
    Sep 21, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-21T18:45:25.500Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:31.316Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:31.975Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:32.015Z: Expanding GroupByKey operations into optimizable parts.
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:32.045Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:32.126Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:32.158Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:32.183Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:32.517Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:32.590Z: Starting 5 workers in us-central1-a...
    Sep 21, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:46:00.782Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 21, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:46:00.808Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 21, 2021 6:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:46:06.367Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 21, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:46:11.035Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 21, 2021 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:46:36.825Z: Workers have started successfully.
    Sep 21, 2021 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:46:36.860Z: Workers have started successfully.
    Sep 21, 2021 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:47:04.617Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:47:04.790Z: Cleaning up.
    Sep 21, 2021 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:47:04.862Z: Stopping worker pool...
    Sep 21, 2021 6:49:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:49:41.520Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 21, 2021 6:49:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:49:41.562Z: Worker pool stopped.
    Sep 21, 2021 6:49:47 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-21_11_45_22-705998211072244175 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d2e5245a-460d-4f4e-94b8-a110f52be334 and timestamp: 2021-09-21T18:49:47.959000000Z:
                     Metric:                    Value:
                   read_time                     7.208
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 6:49:48 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 44.309 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 97 executed, 55 from cache

Publishing build scan...
https://gradle.com/s/22wry22hcx2ji

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2452

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2452/display/redirect>

Changes:


------------------------------------------
[...truncated 336.56 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e50055417125b31f26e1108fef7d1958
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 21, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 21, 2021 12:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 21, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 21, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 21, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 21, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 21, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 21, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test9199631328302929270.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-qtJQ1nU_LVwc-bMkv5pcCvxZv0ZAzYRLUG99Adr53Bo.jar
    Sep 21, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 21, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 21, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash e34f7296faccfaf7745513c9c8f6c117f387ad5a10c8133a3c858ea05cb84898> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-409ylvrM-vd0VRPJyPbBF_OHrVoQyBM6PIWOoFy4SJg.pb
    Sep 21, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 21, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 21, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 21, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 21, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-21_05_45_09-4200941418815551979?project=apache-beam-testing
    Sep 21, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-21_05_45_09-4200941418815551979
    Sep 21, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-21_05_45_09-4200941418815551979
    Sep 21, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-21T12:45:12.560Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 21, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:18.858Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 21, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:19.664Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 21, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:19.698Z: Expanding GroupByKey operations into optimizable parts.
    Sep 21, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:19.734Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 21, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:19.830Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 21, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:19.869Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 21, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:19.903Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 21, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:20.279Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:20.361Z: Starting 5 workers in us-central1-c...
    Sep 21, 2021 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:33.390Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 21, 2021 12:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:46:07.964Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 21, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:46:35.443Z: Workers have started successfully.
    Sep 21, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:46:35.474Z: Workers have started successfully.
    Sep 21, 2021 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:47:06.566Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:47:06.787Z: Cleaning up.
    Sep 21, 2021 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:47:06.917Z: Stopping worker pool...
    Sep 21, 2021 12:50:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:50:05.289Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 21, 2021 12:50:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:50:05.330Z: Worker pool stopped.
    Sep 21, 2021 12:50:11 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-21_05_45_09-4200941418815551979 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 93904afc-5048-4ad6-8983-056c3298e888 and timestamp: 2021-09-21T12:50:11.911000000Z:
                     Metric:                    Value:
                   read_time                     9.809
                 fields_read                 4375276.0
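
(read_time is presumably the wall-clock read span in seconds recorded by ParDo(TimeMonitor); fields_read counts projected fields, and 4,375,276 fields across the 4 selected columns works out to about 1.09 million rows.)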

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 12:50:12 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
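
This warning is why no datapoint reaches InfluxDB even though the test passed: the publisher bails out unless both a database and a measurement are configured. A sketch of supplying them through the test-utils settings builder, assuming the standard org.apache.beam.sdk.testutils.publishing API (host and database values are illustrative):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSetup {
      public static void main(String[] args) {
        // publishWithCheck skips publishing when measurement/database are
        // missing; settings like these would make the check pass.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")       // illustrative
                .withDatabase("beam_test_metrics")       // illustrative
                .withMeasurement("sql_bqio_read_java_batch")
                .get();
      }
    }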

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 5 mins 21.387 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 53s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/mt2hzwi53hric

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2451

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2451/display/redirect?page=changes>

Changes:

[chamikaramj] Python support for directly using Java transforms using constructor and

[chamikaramj] Fixes yapf

[chamikaramj] Fixes lint

[chamikaramj] Addressing reviewer comments

[chamikaramj] Adds support for a field name format that will be ignored at expansion

[chamikaramj] Addresses reviewer comments

[chamikaramj] Use correct ignore field prefix in Python side


------------------------------------------
[...truncated 339.99 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 9 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e50055417125b31f26e1108fef7d1958
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 9'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 9'
Successfully started process 'Gradle Test Executor 9'
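
Everything the test needs to talk to Dataflow arrives through the -DbeamTestPipelineOptions JSON array in the command line above. A minimal sketch of how a Beam integration test reads those values back (standard TestPipeline behavior; the printed getters are just two of the flags visible above):

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class ReadTestOptions {
      public static void main(String[] args) {
        // TestPipeline parses the beamTestPipelineOptions system property
        // into PipelineOptions, viewable as any options interface.
        DataflowPipelineOptions options =
            TestPipeline.testingPipelineOptions().as(DataflowPipelineOptions.class);
        System.out.println(options.getProject());     // apache-beam-testing
        System.out.println(options.getNumWorkers());  // 5
      }
    }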

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
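
(The SLF4J notice is benign: two bindings are on the classpath, one bundled inside the legacy worker jar and one from slf4j-jdk14, and SLF4J simply uses the first one it finds. It has no bearing on the failures below.)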

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 21, 2021 6:45:57 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 21, 2021 6:45:58 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 21, 2021 6:45:58 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 21, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
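
This coder failure (and the identical one in readUsingDefaultMethod below) occurs during pipeline construction, before any job is submitted: the Row-typed output of the RowMonitor ParDo has no schema, so no coder can be inferred. The exception text names the remedy; a minimal sketch of applying it looks like this (monitorFn stands in for the test's RowMonitor DoFn, and this is an illustration of the suggested fix, not the actual patch that landed):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class FixRowCoder {
      // A ParDo that emits Row drops the schema of its input, leaving the
      // output with no inferable coder. Reattaching the schema, as the
      // IllegalStateException suggests, restores a SchemaCoder.
      static PCollection<Row> monitorWithSchema(
          PCollection<Row> input, DoFn<Row, Row> monitorFn) {
        Schema schema = input.getSchema();
        return input.apply(ParDo.of(monitorFn)).setRowSchema(schema);
      }
    }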

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 6:46:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 21, 2021 6:46:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 21, 2021 6:46:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 21, 2021 6:46:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 21, 2021 6:46:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 21, 2021 6:46:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8512241474882420204.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-N3T6yMXWd4vU9P2BPRbMJm0QgxBhdlJ15cxIuaPmGqQ.jar
    Sep 21, 2021 6:46:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 21, 2021 6:46:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 21, 2021 6:46:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash c124355bc851def2d9176381e34ffe6d6e29ff51e60486af60a828cc892ea38d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-wSQ1W8hR3vLZF2OB40_-bW4p_1HmBIavYKgozIkuo40.pb
    Sep 21, 2021 6:46:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 21, 2021 6:46:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 21, 2021 6:46:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 21, 2021 6:46:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 21, 2021 6:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-20_23_46_10-10554845187517000315?project=apache-beam-testing
    Sep 21, 2021 6:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-20_23_46_10-10554845187517000315
    Sep 21, 2021 6:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-20_23_46_10-10554845187517000315
    Sep 21, 2021 6:46:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-21T06:46:16.062Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:21.626Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:22.494Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:22.531Z: Expanding GroupByKey operations into optimizable parts.
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:22.558Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:22.643Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:22.691Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:22.714Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:23.040Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:23.102Z: Starting 5 workers in us-central1-a...
    Sep 21, 2021 6:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:42.938Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 21, 2021 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:47:06.667Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 21, 2021 6:47:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:47:39.878Z: Workers have started successfully.
    Sep 21, 2021 6:47:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:47:39.913Z: Workers have started successfully.
    Sep 21, 2021 6:48:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:48:06.920Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 6:48:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:48:07.036Z: Cleaning up.
    Sep 21, 2021 6:48:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:48:07.099Z: Stopping worker pool...
    Sep 21, 2021 6:50:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:50:23.433Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 21, 2021 6:50:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:50:23.474Z: Worker pool stopped.
    Sep 21, 2021 6:50:28 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-20_23_46_10-10554845187517000315 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1e37e075-1024-4a77-ab3e-6166dc87494d and timestamp: 2021-09-21T06:50:28.994000000Z:
                     Metric:                    Value:
                   read_time                     8.339
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 6:50:29 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 9 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.003 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 35.969 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 8s
152 actionable tasks: 99 executed, 53 from cache

Publishing build scan...
https://gradle.com/s/ffn5mowaadkhq

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2450

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2450/display/redirect?page=changes>

Changes:

[Robert Bradshaw] Additional CoGBK tests.

[noreply] [BEAM-12803] Update deprecated use of _field_types (#15539)

[Robert Bradshaw] Move CoGBK tests into appropreate module.


------------------------------------------
[...truncated 338.38 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4dccc9788914c2d27a695eb8c5d03105
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 21, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 21, 2021 12:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 21, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 21, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 21, 2021 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 21, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 21, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 21, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6331853008731212232.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-YC70iJPj_Rdvnh6iM_nUeFmoKYqMzWSWPerLpAZMR9M.jar
    Sep 21, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 21, 2021 12:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 21, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 43788cc3b0ed4102ad1b13e5829121cdc205e16015bd5c53b166657a9f7eef9d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Q3iMw7DtQQKtGxPlgpEhzcIF4WAVvVxTsWZlep9-750.pb
    Sep 21, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 21, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 21, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 21, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 21, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-20_17_45_04-15111700924769977442?project=apache-beam-testing
    Sep 21, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-20_17_45_04-15111700924769977442
    Sep 21, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-20_17_45_04-15111700924769977442
    Sep 21, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-21T00:45:07.930Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:13.616Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.252Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.301Z: Expanding GroupByKey operations into optimizable parts.
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.327Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.398Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.437Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.469Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.790Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.865Z: Starting 5 workers in us-central1-a...
    Sep 21, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:45.174Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 21, 2021 12:45:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:56.797Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 21, 2021 12:45:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:56.814Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 21, 2021 12:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:46:07.027Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 21, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:46:32.670Z: Workers have started successfully.
    Sep 21, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:46:32.698Z: Workers have started successfully.
    Sep 21, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:47:01.532Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:47:01.667Z: Cleaning up.
    Sep 21, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:47:01.732Z: Stopping worker pool...
    Sep 21, 2021 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:49:26.143Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 21, 2021 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:49:26.185Z: Worker pool stopped.
    Sep 21, 2021 12:49:33 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-20_17_45_04-15111700924769977442 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d992d211-7649-465a-8390-80dd62e59b54 and timestamp: 2021-09-21T00:49:33.364000000Z:
                     Metric:                    Value:
                   read_time                     8.605
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 12:49:33 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 46.265 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 16s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/qhqeuealhlp4y

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2449

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2449/display/redirect?page=changes>

Changes:

[danthev] Fix flaky test.

[danthev] Fix lint errors.


------------------------------------------
[...truncated 341.10 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4dccc9788914c2d27a695eb8c5d03105
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'
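
Note: the -DbeamTestPipelineOptions JSON array in the command above is how these integration tests receive their pipeline options. A minimal sketch of the standard way such options are picked up in Beam tests (assumed usage, not the IT's exact code):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    static TestPipeline pipelineFromSystemProperty() {
      // TestPipeline parses the beamTestPipelineOptions system property set above.
      PipelineOptions options = TestPipeline.testingPipelineOptions();
      return TestPipeline.fromOptions(options);
    }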

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 20, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 20, 2021 6:45:12 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 20, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 20, 2021 6:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 6:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
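
Note: this failure is the planner handing back a raw Row PCollection with no schema attached, so no coder can be inferred. A minimal sketch of the fix the message itself suggests, attaching a schema via PCollection.setRowSchema (field names follow the query above; the INT64 score type is an assumption based on BigQuery's INTEGER):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    static PCollection<Row> attachSchema(PCollection<Row> rows) {
      Schema schema =
          Schema.builder()
              .addStringField("author")
              .addStringField("type")
              .addStringField("title")
              .addInt64Field("score")
              .build();
      // setRowSchema installs a SchemaCoder, which satisfies the
      // "Cannot provide a coder for a Beam Row" requirement above.
      return rows.setRowSchema(schema);
    }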

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 20, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
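
Note: the SQL and SQLPlan/BEAMPlan output above come from Beam SQL's Calcite planner; a standalone pipeline usually reaches the same planner through SqlTransform. A rough sketch (table registration via PCollectionTuple; this is not the IT's code, which goes through BeamSqlRelUtils):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    static PCollection<Row> queryHackerNews(PCollection<Row> hackerNews) {
      // Register the input under the table name used in the query, then apply
      // the same projection and filter the plans above show; with DIRECT_READ
      // the filter is what gets pushed down to the BigQuery storage API.
      return PCollectionTuple.of(new TupleTag<>("HACKER_NEWS"), hackerNews)
          .apply(
              SqlTransform.query(
                  "SELECT `by` AS author, type, title, score "
                      + "FROM HACKER_NEWS "
                      + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
    }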

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 20, 2021 6:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6833087523479725486.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-T_fyMn4O_viNZtHxu6t49Ixm9n1a9rctQNT_ylYHCRQ.jar
    Sep 20, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 20, 2021 6:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 20, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 1e2f0f48131d49f93a6232d0f4591b2f9516fe4f6ade6fd0c0374df1eff8dd41> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Hi8PSBMdSfk6YjLQ9FkbL5UW_k9q3m_QwDdN8e_43UE.pb
    Sep 20, 2021 6:45:24 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 20, 2021 6:45:24 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 20, 2021 6:45:24 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 20, 2021 6:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 20, 2021 6:45:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-20_11_45_25-14236663929264241551?project=apache-beam-testing
    Sep 20, 2021 6:45:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-20_11_45_25-14236663929264241551
    Sep 20, 2021 6:45:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-20_11_45_25-14236663929264241551
    Sep 20, 2021 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-20T18:45:29.646Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 20, 2021 6:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:36.337Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:37.252Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:37.291Z: Expanding GroupByKey operations into optimizable parts.
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:37.323Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:37.397Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:37.463Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:37.500Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:37.923Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:38.017Z: Starting 5 workers in us-central1-c...
    Sep 20, 2021 6:46:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:46:08.754Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 20, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:46:11.080Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 20, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:46:11.112Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 20, 2021 6:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:46:21.360Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 20, 2021 6:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:46:46.062Z: Workers have started successfully.
    Sep 20, 2021 6:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:46:46.098Z: Workers have started successfully.
    Sep 20, 2021 6:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:47:17.485Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 6:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:47:17.636Z: Cleaning up.
    Sep 20, 2021 6:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:47:17.735Z: Stopping worker pool...
    Sep 20, 2021 6:49:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:49:41.896Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 20, 2021 6:49:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:49:41.944Z: Worker pool stopped.
    Sep 20, 2021 6:49:48 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-20_11_45_25-14236663929264241551 finished with status DONE.
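
Note: the job above is submitted to Dataflow and polled until it reaches a terminal state; the gcloud command in the log is the manual counterpart of cancelling through the Beam API. A minimal sketch of the programmatic side (standard PipelineResult usage, assumed rather than quoted from the IT):

    import java.io.IOException;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.PipelineResult;
    import org.joda.time.Duration;

    static void runWithTimeout(Pipeline pipeline) throws IOException {
      PipelineResult result = pipeline.run();
      // Wait up to 10 minutes for a terminal state (DONE in the log above).
      PipelineResult.State state = result.waitUntilFinish(Duration.standardMinutes(10));
      if (state == null || !state.isTerminal()) {
        // API counterpart of the gcloud cancel command shown in the log.
        result.cancel();
      }
    }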

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c2953af7-9466-40b3-a259-ecb9c63bd96e and timestamp: 2021-09-20T18:49:48.800000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.797

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 6:49:49 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 41.483 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/almlfcm2sl7aw

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2448

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2448/display/redirect>

Changes:


------------------------------------------
[...truncated 338.15 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 20, 2021 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 20, 2021 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 20, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 20, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 20, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 20, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 20, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT-h5Vm59YQQa-RcmHCCG09Iw6skDekxZqFrPsGtnvUHB4.jar
    Sep 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7769499112067325841.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-qu_X7N3QbjtJXZki0B2PaWP8mpBRnGZrS80fVAajjhY.jar
    Sep 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT-tests-Lg8Zg7us2s6VPtPB7f5WrJKFl5dhDd82wMfxmGkALkE.jar
    Sep 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 3 files newly uploaded in 0 seconds
    Sep 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 8a69b64dcbf8da07a1892edcd7d5c82dce72df1e1fad1e7147d2528951f5a94b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-imm2Tcv42gehiS7c19XILc5y3x4frR5xR9JSiVH1qUs.pb
    Sep 20, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 20, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 20, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 20, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 20, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-20_05_45_05-6250377755695887449?project=apache-beam-testing
    Sep 20, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-20_05_45_05-6250377755695887449
    Sep 20, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-20_05_45_05-6250377755695887449
    Sep 20, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-20T12:45:10.026Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 20, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:15.234Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.153Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.185Z: Expanding GroupByKey operations into optimizable parts.
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.219Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.296Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.324Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.347Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.682Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.761Z: Starting 5 workers in us-central1-a...
    Sep 20, 2021 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:50.100Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 20, 2021 12:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:46:05.463Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 20, 2021 12:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:46:31.254Z: Workers have started successfully.
    Sep 20, 2021 12:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:46:31.284Z: Workers have started successfully.
    Sep 20, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:46:58.896Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:46:59.032Z: Cleaning up.
    Sep 20, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:46:59.103Z: Stopping worker pool...
    Sep 20, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:49:16.508Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 20, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:49:16.555Z: Worker pool stopped.
    Sep 20, 2021 12:49:21 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-20_05_45_05-6250377755695887449 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 88308b99-e8a2-4d5c-887d-020e57452cfa and timestamp: 2021-09-20T12:49:22.035000000Z:
                     Metric:                    Value:
                   read_time                     8.095
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 12:49:22 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 34.464 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/m5j2k4dd623jc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2447

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2447/display/redirect>

Changes:


------------------------------------------
[...truncated 340.32 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 20, 2021 6:46:18 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 20, 2021 6:46:21 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 20, 2021 6:46:22 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 20, 2021 6:46:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:46:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:46:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 6:46:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:46:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 6:46:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1657634523]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 20, 2021 6:46:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 6:46:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:46:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 6:46:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 6:46:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:46:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 6:46:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@53688365]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
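
The exception above lists its own remedies: set a coder explicitly with .setCoder(), or attach a schema with PCollection.setRowSchema so the SDK can fall back to a schema-aware RowCoder. Below is a minimal, self-contained sketch of the setRowSchema remedy; the schema, pipeline, and class name are hypothetical stand-ins, not the BigQueryIOPushDownIT code.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Hypothetical schema standing in for the four projected HACKER_NEWS columns.
        final Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

        pipeline
            .apply(Create.of(42L)) // a coder for Long is inferable, so this step is fine
            .apply(
                "ToRow",
                ParDo.of(
                    new DoFn<Long, Row>() {
                      @ProcessElement
                      public void processElement(@Element Long score, OutputReceiver<Row> out) {
                        out.output(
                            Row.withSchema(schema)
                                .addValues("someone", "story", "Example title", score)
                                .build());
                      }
                    }))
            // Without this call, coder inference for the Row output fails with the
            // same IllegalStateException as in the stack trace above.
            .setRowSchema(schema);

        pipeline.run().waitUntilFinish();
      }
    }

Equivalently, .setCoder(RowCoder.of(schema)) on the same output satisfies the first suggestion in the message.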

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 6:46:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:46:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:46:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 6:46:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:46:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:46:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 6:46:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 20, 2021 6:46:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 20, 2021 6:46:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 20, 2021 6:46:42 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 20, 2021 6:46:42 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 20, 2021 6:46:43 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7183188378273839349.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-mgZjTt6naUChfRxKldtKZjTKUPteKwyKmF9nMkyzV5s.jar
    Sep 20, 2021 6:46:46 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 4 seconds
    Sep 20, 2021 6:46:46 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 20, 2021 6:46:46 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash cc41988f5af6504fd7e3beb3570ca172c51ab19d5eb56a7850817ae10f0f9f93> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-zEGYj1r2UE_X476zVwyhcsUasZ1etWp4UIF64Q8Pn5M.pb
    Sep 20, 2021 6:46:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 20, 2021 6:46:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 20, 2021 6:46:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 20, 2021 6:46:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 20, 2021 6:46:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-19_23_46_51-13258525420254987853?project=apache-beam-testing
    Sep 20, 2021 6:46:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-19_23_46_51-13258525420254987853
    Sep 20, 2021 6:46:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-19_23_46_51-13258525420254987853
    Sep 20, 2021 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-20T06:46:55.234Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 20, 2021 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:02.911Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 20, 2021 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:03.558Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 20, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:03.589Z: Expanding GroupByKey operations into optimizable parts.
    Sep 20, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:03.618Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 20, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:03.693Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 20, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:03.728Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 20, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:03.761Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 20, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:04.160Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:04.239Z: Starting 5 workers in us-central1-c...
    Sep 20, 2021 6:47:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:33.400Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 20, 2021 6:47:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:33.643Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 20, 2021 6:47:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:33.661Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 20, 2021 6:47:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:43.939Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 20, 2021 6:48:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:48:09.297Z: Workers have started successfully.
    Sep 20, 2021 6:48:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:48:09.320Z: Workers have started successfully.
    Sep 20, 2021 6:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:48:37.512Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 6:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:48:37.640Z: Cleaning up.
    Sep 20, 2021 6:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:48:37.719Z: Stopping worker pool...
    Sep 20, 2021 6:50:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:50:55.307Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 20, 2021 6:50:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:50:55.353Z: Worker pool stopped.
    Sep 20, 2021 6:51:02 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-19_23_46_51-13258525420254987853 finished with status DONE.
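
The SQL, SQLPlan, and BEAMPlan entries above show Beam SQL planning the test query and pushing the projection and filter into the BigQuery source. For reference, here is a minimal sketch of running the same shape of query through the public SqlTransform API; the in-memory table, schema, and class name are hypothetical, and with no storage layer underneath, the filter simply stays in a BeamCalcRel instead of being pushed down.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SqlTransformSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Hypothetical in-memory stand-in for the HACKER_NEWS table.
        Schema schema =
            Schema.builder()
                .addStringField("by")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

        PCollection<Row> hackerNews =
            pipeline.apply(
                Create.of(
                        Row.withSchema(schema).addValues("alice", "story", "A story", 5L).build(),
                        Row.withSchema(schema).addValues("bob", "comment", "A comment", 9L).build())
                    .withRowSchema(schema));

        // The same projection and filter as the logged query; PCOLLECTION refers
        // to the input PCollection in Beam SQL.
        PCollection<Row> result =
            hackerNews.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, `type`, title, score FROM PCOLLECTION "
                        + "WHERE (`type` = 'story' OR `type` = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }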

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f3ce399f-6652-4a1b-84d4-dbe9723458ab and timestamp: 2021-09-20T06:51:02.597000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.959

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 6:51:03 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 55.124 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 28s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/kvqadneeh5zxe

Build cache (/home/jenkins/.gradle/caches/build-cache-1) removing files not accessed on or after Mon Sep 13 06:44:48 UTC 2021.
Invalidating in-memory cache of /home/jenkins/.gradle/caches/journal-1/file-access.bin
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleaned up in 0.209 secs.
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2446

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2446/display/redirect>

Changes:


------------------------------------------
[...truncated 337.33 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 20, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 20, 2021 12:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 20, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 20, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 20, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 20, 2021 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 20, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 20, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 20, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4426132592971236140.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-QoyyI3kA-wH4jxVC-pHdnJGnQBfa-IabpqcP35qDWpw.jar
    Sep 20, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 79e55825c33f8a835ffa6548fb737e7e22279d368b1a178a8d89e1cbe4607272> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-eeVYJcM_ioNf-mVI-3N-fiInnTaLGheKjYnhy-RgcnI.pb
    Sep 20, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 20, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 20, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 20, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 20, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-19_17_45_04-16828390132500595680?project=apache-beam-testing
    Sep 20, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-19_17_45_04-16828390132500595680
    Sep 20, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-19_17_45_04-16828390132500595680
    Sep 20, 2021 12:45:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-20T00:45:08.100Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 20, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:15.297Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.133Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.173Z: Expanding GroupByKey operations into optimizable parts.
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.209Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.270Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.295Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.319Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.599Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.675Z: Starting 5 workers in us-central1-c...
    Sep 20, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:32.094Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 20, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:47.154Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 20, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:47.176Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 20, 2021 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:57.473Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 20, 2021 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:46:24.753Z: Workers have started successfully.
    Sep 20, 2021 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:46:24.793Z: Workers have started successfully.
    Sep 20, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:46:57.674Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:46:57.839Z: Cleaning up.
    Sep 20, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:46:57.910Z: Stopping worker pool...
    Sep 20, 2021 12:49:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:49:12.687Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 20, 2021 12:49:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:49:12.730Z: Worker pool stopped.
    Sep 20, 2021 12:49:21 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-19_17_45_04-16828390132500595680 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): db669282-1e3e-44f4-8639-0640c7fa31b8 and timestamp: 2021-09-20T00:49:21.160000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.535

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 12:49:21 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 34.083 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 4s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/o6ouf5zpkyv3q

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2445

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2445/display/redirect>

Changes:


------------------------------------------
[...truncated 337.53 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 19, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 19, 2021 6:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 19, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 19, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 19, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 19, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 19, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1653160675735518796.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-iEaPyr1lnZzA_UmSRUXQmZqPy8kQiRwdjh7sc0PoCU0.jar
    Sep 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash 8d6e01620c12c377ee6239240cec7d82c62380ebf854e8819204ec5eb64893fa> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-jW4BYgwSw3fuYjkkDOx9gsYjgOv4VOiBkgTsXrZIk_o.pb
    Sep 19, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 19, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 19, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 19, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 19, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-19_11_45_07-12541032630070109918?project=apache-beam-testing
    Sep 19, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-19_11_45_07-12541032630070109918
    Sep 19, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-19_11_45_07-12541032630070109918
    Sep 19, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-19T18:45:10.752Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:16.403Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.162Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.203Z: Expanding GroupByKey operations into optimizable parts.
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.241Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.317Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.352Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.387Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.741Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.820Z: Starting 5 workers in us-central1-a...
    Sep 19, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:39.325Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 19, 2021 6:45:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:56.154Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 19, 2021 6:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:46:22.128Z: Workers have started successfully.
    Sep 19, 2021 6:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:46:22.156Z: Workers have started successfully.
    Sep 19, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:46:49.822Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:46:49.927Z: Cleaning up.
    Sep 19, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:46:50.010Z: Stopping worker pool...
    Sep 19, 2021 6:49:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:49:06.639Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 19, 2021 6:49:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:49:06.690Z: Worker pool stopped.
    Sep 19, 2021 6:49:12 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-19_11_45_07-12541032630070109918 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 18942ca1-42c4-41c2-91fe-a5601972494c and timestamp: 2021-09-19T18:49:12.536000000Z:
                     Metric:                    Value:
                   read_time                     6.595
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 6:49:13 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
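
This warning means the InfluxDB settings were never wired up, so metric publishing is skipped. A hedged sketch, assuming Beam's testutils InfluxDBSettings builder; the host and database values below are placeholders, and the measurement name is borrowed from this job's --metricsBigQueryTable option.

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    class InfluxSettingsSketch {
      // The publisher skips publishing when database and measurement are
      // absent; supplying them removes the warning. Host and database are
      // placeholders; the measurement reuses this job's BigQuery table name.
      static InfluxDBSettings settings() {
        return InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_test_metrics")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
      }
    }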

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 23.177 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 53s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/2yqb7d7vumxq4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2444

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2444/display/redirect>

Changes:


------------------------------------------
[...truncated 337.50 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 19, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 19, 2021 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 19, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 19, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
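
The exception names its own remedies: attach a schema with PCollection.setRowSchema, or set a coder explicitly. A minimal sketch of the first option, assuming a Row schema with the four columns the query projects; the field names and types are inferred from the SELECT list, not from the table definition.

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // Attaching a schema installs a SchemaCoder, which is exactly what
      // PCollection.getCoder() fails to infer above. Field names and types
      // follow the projected columns of the test query.
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();
        return rows.setRowSchema(schema);
      }
    }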

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
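
The same failure can also be addressed with the second remedy the message lists, an explicit coder. A short sketch using RowCoder, which wraps the same schema-based encoding; the schema argument is assumed to be built as in the earlier sketch.

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class ExplicitCoderFix {
      // Pinning the coder directly is equivalent to setRowSchema: RowCoder
      // wraps the same schema-based encoding for Row elements.
      static PCollection<Row> pinCoder(PCollection<Row> rows, Schema schema) {
        return rows.setCoder(RowCoder.of(schema));
      }
    }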

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 19, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
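
For comparison, the pushed-down projection and filter correspond to what a hand-written BigQueryIO read would request from the Storage Read API. A sketch assuming a hypothetical Hacker News table reference; withSelectedFields and withRowRestriction mirror the usedFields list and the filter above.

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    class PushDownEquivalent {
      // Server-side projection and filtering, as the planner arranged above.
      // The table reference is illustrative, not the one this test reads.
      static TypedRead<TableRow> read() {
        return BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")
            .withMethod(TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
      }
    }
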
    Sep 19, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 19, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 19, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 19, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2385287251749945531.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-biuKFtfxebcbszDzssfOEHX30Yc7RM-ZnvJd_N3pXX0.jar
    Sep 19, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 19, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 19, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash ec3685dd45af0a8234e46f2df0a7b05dd59e901163188197770e944b89a26c89> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-7DaF3UWvCoI05G8t8KewXdWekBFjGIGXdw6US4mibIk.pb
    Sep 19, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 19, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 19, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 19, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 19, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-19_05_45_07-15457158388052488107?project=apache-beam-testing
    Sep 19, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-19_05_45_07-15457158388052488107
    Sep 19, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-19_05_45_07-15457158388052488107
    Sep 19, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-19T12:45:10.648Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 19, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:16.699Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:17.465Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:17.496Z: Expanding GroupByKey operations into optimizable parts.
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:17.522Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:17.594Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:17.622Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:17.655Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:17.987Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:18.057Z: Starting 5 workers in us-central1-a...
    Sep 19, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:22.370Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 19, 2021 12:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:46:05.281Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 19, 2021 12:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:46:35.326Z: Workers have started successfully.
    Sep 19, 2021 12:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:46:35.354Z: Workers have started successfully.
    Sep 19, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:47:02.720Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:47:02.859Z: Cleaning up.
    Sep 19, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:47:02.920Z: Stopping worker pool...
    Sep 19, 2021 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:49:30.212Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 19, 2021 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:49:30.248Z: Worker pool stopped.
    Sep 19, 2021 12:49:35 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-19_05_45_07-15457158388052488107 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 946bdbb9-ad1a-4629-82a7-6aa6743acae4 and timestamp: 2021-09-19T12:49:35.342000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.918

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 12:49:35 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 46.36 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/4kjmt3h3hs2n4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2443

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2443/display/redirect>

Changes:


------------------------------------------
[...truncated 341.79 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 19, 2021 6:45:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 19, 2021 6:45:54 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 19, 2021 6:45:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 19, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:46:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 6:46:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:46:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:46:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@127179475]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 19, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@868637325]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 19, 2021 6:46:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 19, 2021 6:46:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 19, 2021 6:46:10 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 19, 2021 6:46:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5363249692408335901.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-y-B-2DhdolkcxTsYTbZYx3rzcYpbpnp9bQwyiRUcOKM.jar
    Sep 19, 2021 6:46:11 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 19, 2021 6:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 19, 2021 6:46:11 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 978c84376bd852fc1fde085c4f5a152de5fe925360e243553909af8e8a35a2fa> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-l4yEN2vYUvwf3ghcT1oVLeX-klNg4kNVOQmvjoo1ovo.pb
    Sep 19, 2021 6:46:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 19, 2021 6:46:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 19, 2021 6:46:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 19, 2021 6:46:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 19, 2021 6:46:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-18_23_46_14-17703222293438978451?project=apache-beam-testing
    Sep 19, 2021 6:46:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-18_23_46_14-17703222293438978451
    Sep 19, 2021 6:46:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-18_23_46_14-17703222293438978451
    Sep 19, 2021 6:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-19T06:46:18.008Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 19, 2021 6:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:27.446Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.273Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.321Z: Expanding GroupByKey operations into optimizable parts.
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.347Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.422Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.455Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.483Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.799Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.889Z: Starting 5 workers in us-central1-c...
    Sep 19, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:58.065Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 19, 2021 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:47:07.163Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 19, 2021 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:47:07.190Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 19, 2021 6:47:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:47:17.457Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 19, 2021 6:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:47:42.226Z: Workers have started successfully.
    Sep 19, 2021 6:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:47:42.260Z: Workers have started successfully.
    Sep 19, 2021 6:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:48:10.097Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 6:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:48:10.231Z: Cleaning up.
    Sep 19, 2021 6:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:48:10.310Z: Stopping worker pool...
    Sep 19, 2021 6:50:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:50:36.446Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 19, 2021 6:50:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:50:36.489Z: Worker pool stopped.
    Sep 19, 2021 6:50:42 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-18_23_46_14-17703222293438978451 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 70f996ce-5135-4209-aadf-ae60dd06f90f and timestamp: 2021-09-19T06:50:42.162000000Z:
                     Metric:                    Value:
                   read_time                     7.234
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 6:50:42 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 57.09 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/peudu53fkfzye

Build cache (/home/jenkins/.gradle/caches/build-cache-1) removing files not accessed on or after Sun Sep 12 06:44:40 UTC 2021.
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleaned up in 0.83 secs.
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2442

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2442/display/redirect>

Changes:


------------------------------------------
[...truncated 339.45 KB...]
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 19, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 19, 2021 12:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 19, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 19, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
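
As the exception text says, the failure is that a PCollection of Beam Rows carries no schema, so no RowCoder can be inferred when the next transform is applied. A minimal, self-contained sketch of the suggested PCollection.setRowSchema fix (illustrative names, assuming beam-sdks-java-core and a direct runner on the classpath; this is not the IT's actual code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      // Mirrors the four columns projected by the query above.
      static final Schema SCHEMA =
          Schema.builder()
              .addStringField("author")
              .addStringField("type")
              .addStringField("title")
              .addInt64Field("score")
              .build();

      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<Row> rows =
            p.apply(Create.of("story"))
                .apply(ParDo.of(new DoFn<String, Row>() {
                  @ProcessElement
                  public void process(@Element String type, OutputReceiver<Row> out) {
                    out.output(Row.withSchema(SCHEMA)
                        .addValues("some_user", type, "some title", 3L)
                        .build());
                  }
                }))
                // Without this call, coder inference fails at finishSpecifying()
                // with exactly the IllegalStateException logged above.
                .setRowSchema(SCHEMA);
        p.run().waitUntilFinish();
      }
    }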

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 19, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
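
For comparison with the non-push-down plans above: here the planner hands both the projection (usedFields) and the filter to the BigQuery Storage Read API instead of evaluating them in a BeamCalcRel. Hand-written with BigQueryIO, the equivalent read looks roughly like the sketch below (the public Hacker News table reference is an assumption; this is not the IT's code):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("bigquery-public-data:hacker_news.full")
                    .withMethod(Method.DIRECT_READ)
                    // Projection push-down: only the used fields are read.
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    // Filter push-down: evaluated server-side by the Storage API.
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
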
    Sep 19, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test659937586620404907.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-SR_QdYYePgNPgrgs2medifMEAsLCBllgkgYMm5kJfGE.jar
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.visible-assertions/visible-assertions/2.1.2/20d31a578030ec8e941888537267d3123c2ad1c1/visible-assertions-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/visible-assertions-2.1.2-RQSulosjfNzcto_1sHqmOr5JkvkHp3w9YSCqm5BBQBw.jar
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.duct-tape/duct-tape/1.0.8/92edc22a9ab2f3e17c9bf700aaee377d50e8b530/duct-tape-1.0.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/duct-tape-1.0.8-Mc7xLd7JedH4bXz3CMQaF9pSPQXGhf1mQunQsq3bckA.jar
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna/5.5.0/e0845217c4907822403912ad6828d8e0b256208/jna-5.5.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-5.5.0-swj66_5O1AnehBDgpjLRZLISawNfbqz_lo05CMr7TZ4.jar
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna-platform/4.0.0/deb6bf66918989b50209b8c9aaf3b2561af7f011/jna-platform-4.0.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-platform-4.0.0-B21i7Yfna9yzdQ_-gKpJXuIRK9chJmOVTWVH9IJEkuk.jar
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 0 seconds
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash 05ff7e70ecae048a27522839ee324f50718aa4f7a7a493521b27da3ef5c1505e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Bf9-cOyuBIonUig57jJPUHGKpPenpJNSGyfaPvXBUF4.pb
    Sep 19, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 19, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 19, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 19, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 19, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-18_17_45_05-4344567043694350364?project=apache-beam-testing
    Sep 19, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-18_17_45_05-4344567043694350364
    Sep 19, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-18_17_45_05-4344567043694350364
    Sep 19, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-19T00:45:08.994Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:16.034Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:16.877Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:16.910Z: Expanding GroupByKey operations into optimizable parts.
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:16.934Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:16.995Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:17.032Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:17.072Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:17.440Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:17.521Z: Starting 5 workers in us-central1-c...
    Sep 19, 2021 12:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:37.426Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 19, 2021 12:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:50.546Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 19, 2021 12:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:50.578Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 19, 2021 12:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:46:00.851Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 19, 2021 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:46:26.464Z: Workers have started successfully.
    Sep 19, 2021 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:46:26.520Z: Workers have started successfully.
    Sep 19, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:46:58.172Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:46:58.534Z: Cleaning up.
    Sep 19, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:46:58.630Z: Stopping worker pool...
    Sep 19, 2021 12:49:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:49:14.155Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 19, 2021 12:49:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:49:14.228Z: Worker pool stopped.
    Sep 19, 2021 12:49:54 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-18_17_45_05-4344567043694350364 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0b513e5e-647a-4717-a010-2f08d542684d and timestamp: 2021-09-19T00:49:54.400000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.793

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 12:49:54 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
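
(The metrics above were computed but not exported: InfluxDBPublisher skips publishing when no database/measurement target is configured. In Beam's test utilities that target normally arrives through pipeline options, roughly --influxDatabase and --influxMeasurement, though the exact option names are an assumption here; the -DbeamTestPipelineOptions list in the Gradle command above passes neither.)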

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 5 mins 6.486 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 37s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/bxeybgq7gulgm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2441

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2441/display/redirect>

Changes:


------------------------------------------
[...truncated 338.26 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar>","--region=us-central1"] -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 18, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 18, 2021 6:45:01 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 18, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 18, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 18, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 18, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 18, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 18, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 18, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5979530977339199524.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-UYkmvEai9tqk1NB18q5tE65KTYKyeQG_fPS1avJAKWo.jar
    Sep 18, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 18, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 18, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash ff9277464ca049e2305ccd0594126e954b43700e85cd983c4e244cbb4fe3ebc2> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-_5J3RkygSeIwXM0FlBJulUtDcA6FzZg8TiRMu0_j68I.pb
    Sep 18, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 18, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 18, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 18, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 18, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-18_11_45_15-16673422427586790898?project=apache-beam-testing
    Sep 18, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-18_11_45_15-16673422427586790898
    Sep 18, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-18_11_45_15-16673422427586790898
    Sep 18, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-18T18:45:19.387Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 18, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:25.862Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:26.788Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:26.822Z: Expanding GroupByKey operations into optimizable parts.
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:26.845Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:26.909Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:26.940Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:26.974Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:27.319Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:27.388Z: Starting 5 workers in us-central1-c...
    Sep 18, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:44.268Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 18, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:46:11.422Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 18, 2021 6:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:46:37.715Z: Workers have started successfully.
    Sep 18, 2021 6:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:46:37.767Z: Workers have started successfully.
    Sep 18, 2021 6:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:47:07.698Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 6:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:47:07.845Z: Cleaning up.
    Sep 18, 2021 6:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:47:07.934Z: Stopping worker pool...
    Sep 18, 2021 6:49:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:49:24.022Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 18, 2021 6:49:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:49:24.058Z: Worker pool stopped.
    Sep 18, 2021 6:49:30 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-18_11_45_15-16673422427586790898 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2df1b947-7957-404a-a6bb-ecfb2f4fe2a4 and timestamp: 2021-09-18T18:49:30.288000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.433

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 6:49:31 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.038 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 35.478 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/4sgrafoyp3qho

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2440

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2440/display/redirect>

Changes:


------------------------------------------
[...truncated 338.63 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar>","--region=us-central1"] -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 18, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 18, 2021 12:45:00 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 18, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 18, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 18, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 18, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 18, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test508090196888883738.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-j8lI1z78mx1qReg8w20b9ImbvZUa94tSU87P0rkRTXs.jar
    Sep 18, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 18, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 18, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104004 bytes, hash c0595ef0cd231df1819f851e89b0e4f83ebe911b6543e7b429798027b86e9438> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-wFle8M0jHfGBn4UeibDk-D6-kRtlQ-e0KXmAJ7hulDg.pb
    Sep 18, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 18, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 18, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 18, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 18, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-18_05_45_14-6325844803791487056?project=apache-beam-testing
    Sep 18, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-18_05_45_14-6325844803791487056
    Sep 18, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-18_05_45_14-6325844803791487056
    Sep 18, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-18T12:45:17.859Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 18, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:24.925Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:25.865Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:25.918Z: Expanding GroupByKey operations into optimizable parts.
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:25.953Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:26.024Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:26.060Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:26.094Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:26.446Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:26.508Z: Starting 5 workers in us-central1-c...
    Sep 18, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:39.478Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 18, 2021 12:46:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:46:13.666Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 18, 2021 12:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:46:39.620Z: Workers have started successfully.
    Sep 18, 2021 12:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:46:39.644Z: Workers have started successfully.
    Sep 18, 2021 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:47:07.254Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:47:07.384Z: Cleaning up.
    Sep 18, 2021 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:47:07.463Z: Stopping worker pool...
    Sep 18, 2021 12:49:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:49:41.585Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 18, 2021 12:49:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:49:41.626Z: Worker pool stopped.
    Sep 18, 2021 12:49:47 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-18_05_45_14-6325844803791487056 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 058dd872-bbd4-4445-b23f-91500d0acd92 and timestamp: 2021-09-18T12:49:47.520000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.455

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 12:49:47 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 52.004 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 27s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wrv4jwmeuvrfw

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2439

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2439/display/redirect>

Changes:


------------------------------------------
[...truncated 339.46 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
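
The -DbeamTestPipelineOptions JSON array in the command line above is how the integration test receives its pipeline configuration: Beam's TestPipeline reads that system property and turns it into PipelineOptions. A rough, self-contained equivalent of that parsing step, using only core options (the specific flags here are illustrative):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class OptionsSketch {
      public static void main(String[] args) {
        // TestPipeline parses the JSON array in the beamTestPipelineOptions
        // system property; this sketch parses equivalent args directly.
        PipelineOptions options =
            PipelineOptionsFactory.fromArgs(
                    "--tempLocation=gs://temp-storage-for-perf-tests/loadtests",
                    "--jobName=options-sketch")
                .withValidation()
                .create();
        System.out.println(options.getTempLocation());
      }
    }
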

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 18, 2021 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 18, 2021 6:44:59 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 18, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 18, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

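The root cause spelled out in the failure above is a Row PCollection whose schema was never attached, so no RowCoder can be inferred. The remedy the message itself names is PCollection.setRowSchema. A minimal sketch of that fix is below; the schema fields and transforms are illustrative, not the actual test code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TypeDescriptors;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();
        Row row =
            Row.withSchema(schema).addValues("alice", "story", "hello", 3L).build();
        PCollection<Row> withSchema =
            p.apply(Create.of(row).withRowSchema(schema));
        // A transform that outputs Row cannot infer a coder on its own;
        // calling setRowSchema(...) on its output is the fix the
        // IllegalStateException asks for.
        withSchema
            .apply(MapElements.into(TypeDescriptors.rows()).via((Row r) -> r))
            .setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }
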
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 18, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 18, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 18, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 18, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 18, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4040703456648660585.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-2ZQ1RyJ5WY-qkJE4NhYl5MZjBgQtRsIm40FC8zwqkmM.jar
    Sep 18, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.alibaba/fastjson/1.2.69/6cb063f1d527ff65bdbb9ea74888a5ffc3f92197/fastjson-1.2.69.jar to gs://temp-storage-for-perf-tests/loadtests/staging/fastjson-1.2.69-KniRdEoDeOAJmpuB3b7U0uGn6GMvb4b4_8-jg-UeScg.jar
    Sep 18, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Sep 18, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 18, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 148d6269386e03b8acc83a543743e9acd29909f92967069cc060211c436c7640> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-FI1iaThuA7isyDpUN0PprNKZCfkpZwacwGAhHENsdkA.pb
    Sep 18, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 18, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 18, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 18, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 18, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-17_23_45_13-8888735768678419727?project=apache-beam-testing
    Sep 18, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-17_23_45_13-8888735768678419727
    Sep 18, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-17_23_45_13-8888735768678419727
    Sep 18, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-18T06:45:19.954Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 18, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:25.576Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.327Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.358Z: Expanding GroupByKey operations into optimizable parts.
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.385Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.434Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.460Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.485Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.803Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.876Z: Starting 5 workers in us-central1-a...
    Sep 18, 2021 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:53.746Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 18, 2021 6:46:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:46:13.043Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 18, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:46:38.824Z: Workers have started successfully.
    Sep 18, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:46:38.850Z: Workers have started successfully.
    Sep 18, 2021 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:47:06.775Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:47:06.897Z: Cleaning up.
    Sep 18, 2021 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:47:06.976Z: Stopping worker pool...
    Sep 18, 2021 6:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:49:26.810Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 18, 2021 6:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:49:26.856Z: Worker pool stopped.
    Sep 18, 2021 6:49:32 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-17_23_45_13-8888735768678419727 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9ff13ac3-1a6e-4c0c-a574-0adff4797089 and timestamp: 2021-09-18T06:49:32.795000000Z:
                     Metric:                    Value:
                   read_time                     8.761
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 6:49:33 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 39.42 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ioqdzwgbbysx6

Build cache (/home/jenkins/.gradle/caches/build-cache-1) removing files not accessed on or after Sat Sep 11 06:44:26 UTC 2021.
Invalidating in-memory cache of /home/jenkins/.gradle/caches/journal-1/file-access.bin
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleaned up in 0.341 secs.
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2438

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2438/display/redirect?page=changes>

Changes:

[noreply] Minor: Prune docker volumes in Inventory job(#15532)


------------------------------------------
[...truncated 340.14 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 18, 2021 12:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 18, 2021 12:45:20 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 18, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 18, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 12:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 18, 2021 12:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 18, 2021 12:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 18, 2021 12:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 18, 2021 12:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 18, 2021 12:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5940345697793417465.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-T8rrH2TELNbuh4XgaDr5Y23V20HI5R9g9FbPaOVIv30.jar
    Sep 18, 2021 12:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 18, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 18, 2021 12:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 96581bf42c7128cd2a50f61fe3911b17cca1e72d224acb344417760ce6e5ebb6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-llgb9CxxKM0qUPYf45EbF8yh5y0iSss0RBd2DObl67Y.pb
    Sep 18, 2021 12:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 18, 2021 12:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 18, 2021 12:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 18, 2021 12:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 18, 2021 12:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-17_17_45_35-301988535343975277?project=apache-beam-testing
    Sep 18, 2021 12:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-17_17_45_35-301988535343975277
    Sep 18, 2021 12:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-17_17_45_35-301988535343975277
    Sep 18, 2021 12:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-18T00:45:38.481Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:48.886Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:49.838Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:49.912Z: Expanding GroupByKey operations into optimizable parts.
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:49.945Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:50.014Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:50.038Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:50.066Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:50.438Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:50.832Z: Starting 5 workers in us-central1-c...
    Sep 18, 2021 12:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:46:12.224Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 18, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:46:35.610Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 18, 2021 12:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:47:01.755Z: Workers have started successfully.
    Sep 18, 2021 12:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:47:01.792Z: Workers have started successfully.
    Sep 18, 2021 12:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:47:31.372Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 12:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:47:31.504Z: Cleaning up.
    Sep 18, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:47:31.614Z: Stopping worker pool...
    Sep 18, 2021 12:49:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:49:53.807Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 18, 2021 12:49:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:49:53.850Z: Worker pool stopped.
    Sep 18, 2021 12:50:00 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-17_17_45_35-301988535343975277 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5054a4b3-af77-46fa-95ea-0d11e7efa0d0 and timestamp: 2021-09-18T00:50:00.579000000Z:
                     Metric:                    Value:
                   read_time                     8.551
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 12:50:01 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 45.26 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 38s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/7ronkkvanjuec

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2437

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2437/display/redirect?page=changes>

Changes:

[samuelw] [BEAM-12740] Add option to CreateOptions to avoid GetObjectMetadata for


------------------------------------------
[...truncated 344.71 KB...]
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
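
The SLF4J notice is informational: two StaticLoggerBinder classes are on the test classpath, one shaded into the legacy worker jar and one from slf4j-jdk14, and SLF4J binds to the JDK14 implementation. One way to confirm at runtime which binding won, should the choice ever matter, is to ask the LoggerFactory directly; a minimal standalone check:

    import org.slf4j.LoggerFactory;

    public class Slf4jBindingCheck {
      public static void main(String[] args) {
        // Prints the ILoggerFactory implementation actually in use; for the
        // run above this would be org.slf4j.impl.JDK14LoggerFactory.
        System.out.println(LoggerFactory.getILoggerFactory().getClass().getName());
      }
    }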

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 17, 2021 6:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 17, 2021 6:45:25 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 17, 2021 6:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 17, 2021 6:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 6:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 17, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 6:45:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 17, 2021 6:45:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
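
This failure, and the matching readUsingDefaultMethod failure below, share one root cause: the PCollection<Row> emitted under the BeamIOSourceRel carries no schema, so no RowCoder can be inferred for it. As the exception text suggests, the usual remedy is to attach the schema to the Row-typed output with setRowSchema. A minimal self-contained sketch of that pattern, using an illustrative schema and a trivial pass-through DoFn standing in for RowMonitor (none of the names below are the IT's actual code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Illustrative schema mirroring the query's projected columns.
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        PCollection<Row> rows = p
            .apply(Create.of(Row.withSchema(schema)
                    .addValues("someone", "story", "a title", 3L)
                    .build())
                .withRowSchema(schema))
            .apply(ParDo.of(new DoFn<Row, Row>() {
              @ProcessElement
              public void process(@Element Row row, OutputReceiver<Row> out) {
                out.output(row);
              }
            }))
            // Without this call, coder inference fails exactly as in the
            // stack trace above.
            .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }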

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 17, 2021 6:45:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 17, 2021 6:45:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 6:45:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 17, 2021 6:45:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 17, 2021 6:45:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 6:45:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 17, 2021 6:45:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 17, 2021 6:45:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 6:45:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 6:45:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 17, 2021 6:45:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 6:45:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 6:45:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 17, 2021 6:45:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 17, 2021 6:45:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
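
The plan and filter above show the push-down working as intended: only the four referenced columns are read, and the supported predicate is evaluated by the BigQuery Storage Read API rather than inside the pipeline. At the BigQueryIO level the same read could be written directly as below; the table reference is a placeholder, not the table this IT reads:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(BigQueryIO.readTableRows()
            .from("some-project:some_dataset.HACKER_NEWS")  // placeholder
            .withMethod(TypedRead.Method.DIRECT_READ)
            // Projection push-down: only these columns are fetched.
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // Predicate push-down: evaluated by the Storage Read API.
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
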
    Sep 17, 2021 6:45:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 17, 2021 6:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 17, 2021 6:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 17, 2021 6:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8192769708742958485.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test--eo5hjTkA_gOYJj5pPo8Vv-4OWVNuJNTdDF9HuJYPOA.jar
    Sep 17, 2021 6:45:36 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 17, 2021 6:45:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 17, 2021 6:45:36 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 8bf3e6103fdb273470e3467c8f94bb2295d81284598b0a5d04cac59d23f8142f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-i_PmED_bJzRw40Z8j5S7IpXYEoRZiwpdBMrFnSP4FC8.pb
    Sep 17, 2021 6:45:38 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 17, 2021 6:45:38 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 17, 2021 6:45:38 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 17, 2021 6:45:39 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 17, 2021 6:45:39 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-17_11_45_39-6540985896134660808?project=apache-beam-testing
    Sep 17, 2021 6:45:39 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-17_11_45_39-6540985896134660808
    Sep 17, 2021 6:45:39 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-17_11_45_39-6540985896134660808
    Sep 17, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-17T18:45:42.635Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 17, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:45:49.060Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 17, 2021 6:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:45:49.854Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 17, 2021 6:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:45:49.891Z: Expanding GroupByKey operations into optimizable parts.
    Sep 17, 2021 6:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:45:49.920Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 17, 2021 6:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:45:50.004Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 17, 2021 6:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:45:50.074Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 17, 2021 6:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:45:50.146Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 17, 2021 6:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:45:50.547Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 17, 2021 6:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:45:50.626Z: Starting 5 workers in us-central1-c...
    Sep 17, 2021 6:46:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:46:11.576Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 17, 2021 6:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:46:24.034Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 17, 2021 6:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:46:24.069Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 17, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:46:34.376Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 17, 2021 6:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:46:58.408Z: Workers have started successfully.
    Sep 17, 2021 6:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:46:58.490Z: Workers have started successfully.
    Sep 17, 2021 6:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:47:30.285Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 17, 2021 6:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:47:30.400Z: Cleaning up.
    Sep 17, 2021 6:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:47:30.470Z: Stopping worker pool...
    Sep 17, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:49:54.511Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 17, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T18:49:54.541Z: Worker pool stopped.
    Sep 17, 2021 6:50:02 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-17_11_45_39-6540985896134660808 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 70ab0286-aff4-4587-b427-647593c2fb91 and timestamp: 2021-09-17T18:50:02.353000000Z:
                     Metric:                    Value:
                   read_time                    11.039
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 17, 2021 6:50:03 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 42.967 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 43s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/hzjxpuraomcyk

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2436

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2436/display/redirect>

Changes:


------------------------------------------
[...truncated 337.56 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 91a68d30729b97c362c4d1b5908c850f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 17, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 17, 2021 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 17, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 17, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@977492762]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@271342901]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 17, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 17, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 17, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1363837139006503109.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-2TjWzeVf5-wlCDKk8P0aezQ9m2xzP9nubgCXb4-Ui4o.jar
    Sep 17, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 6 seconds
    Sep 17, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 17, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 274c09b88f0e490b31e4e90aadacd3736b384135a75ba8c0c86dcc33e3f9a73d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-J0wJuI8OSQsx5OkKrazTc2s4QTWnW6jAyG3MM-P5pz0.pb
    Sep 17, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 17, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 17, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 17, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 17, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-17_05_45_16-15022222221004330035?project=apache-beam-testing
    Sep 17, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-17_05_45_16-15022222221004330035
    Sep 17, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-17_05_45_16-15022222221004330035
    Sep 17, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-17T12:45:19.753Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 17, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:45:31.791Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 17, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:45:32.606Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 17, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:45:32.643Z: Expanding GroupByKey operations into optimizable parts.
    Sep 17, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:45:32.675Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 17, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:45:32.757Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 17, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:45:32.792Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 17, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:45:32.827Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 17, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:45:33.146Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 17, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:45:33.236Z: Starting 5 workers in us-central1-c...
    Sep 17, 2021 12:45:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:45:52.877Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 17, 2021 12:46:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:46:06.222Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 17, 2021 12:46:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:46:06.247Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 17, 2021 12:46:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:46:16.517Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 17, 2021 12:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:46:41.355Z: Workers have started successfully.
    Sep 17, 2021 12:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:46:41.400Z: Workers have started successfully.
    Sep 17, 2021 12:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:47:11.540Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 17, 2021 12:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:47:11.666Z: Cleaning up.
    Sep 17, 2021 12:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:47:11.729Z: Stopping worker pool...
    Sep 17, 2021 12:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:49:30.038Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 17, 2021 12:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T12:49:30.078Z: Worker pool stopped.
    Sep 17, 2021 12:49:35 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-17_05_45_16-15022222221004330035 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b13ac26c-8f0d-4bde-ae06-cc152641018b and timestamp: 2021-09-17T12:49:35.883000000Z:
                     Metric:                    Value:
                   read_time                     8.328
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 17, 2021 12:49:36 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 44.45 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/3qjiegdf266wg

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2435

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2435/display/redirect>

Changes:


------------------------------------------
[...truncated 337.61 KB...]
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 91a68d30729b97c362c4d1b5908c850f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 17, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 17, 2021 6:44:55 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 17, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 17, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 17, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 17, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@323141774]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 17, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 17, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 17, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 17, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 17, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1176308843]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 17, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 17, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 17, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 17, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
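
For context on what the push-down buys: the usedFields list and the filter from the BEAMPlan above are handed to the BigQuery Storage Read API, so only the four projected columns and the rows matching the predicate ever leave BigQuery. At the BigQueryIO level, the pushed-down read is roughly equivalent to the following sketch (the table reference is a placeholder, not the table this job reads):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<TableRow> rows =
            p.apply(BigQueryIO.readTableRows()
                .from("some-project:some_dataset.hacker_news") // placeholder table
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Projection push-down: only these columns are read.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate push-down: the filter is evaluated server-side.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }

Note that withSelectedFields and withRowRestriction are honored only with Method.DIRECT_READ, which matches the DIRECT_READ setting logged for this test.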
    Sep 17, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 17, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 17, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 17, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2550726164129370281.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-deWHtMR5wmzwPK5pnKKjd5F6Ir4u_pC0SMdGES9dNTo.jar
    Sep 17, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-vendor-calcite-1_26_0/0.1/5fae4e97a2d8739462bd1572e48d01228766b6ef/beam-vendor-calcite-1_26_0-0.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-calcite-1_26_0-0.1-pYZ7esxRWyhKmBqBdfrpnxvg8woyykTvGbaCvLtyRyA.jar
    Sep 17, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 1 seconds
    Sep 17, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 17, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash 5f0a7955fc72cd956d32fe88bca969cb83e2048471bafdb8015edee863e8a5c1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Xwp5VfxyzZVtMv6IvKlpy4PiBIRxuv24AV7e6GPopcE.pb
    Sep 17, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 17, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-16_23_45_12-4141909358847894549?project=apache-beam-testing
    Sep 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-16_23_45_12-4141909358847894549
    Sep 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-16_23_45_12-4141909358847894549
    Sep 17, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-17T06:45:15.494Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 17, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:45:22.468Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 17, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:45:23.147Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 17, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:45:23.188Z: Expanding GroupByKey operations into optimizable parts.
    Sep 17, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:45:23.218Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 17, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:45:23.282Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 17, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:45:23.307Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 17, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:45:23.336Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 17, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:45:23.671Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 17, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:45:23.755Z: Starting 5 workers in us-central1-a...
    Sep 17, 2021 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:45:45.289Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 17, 2021 6:46:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:46:04.814Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 17, 2021 6:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:46:30.897Z: Workers have started successfully.
    Sep 17, 2021 6:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:46:30.922Z: Workers have started successfully.
    Sep 17, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:46:58.993Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 17, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:46:59.140Z: Cleaning up.
    Sep 17, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:46:59.218Z: Stopping worker pool...
    Sep 17, 2021 6:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:49:21.759Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 17, 2021 6:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T06:49:21.805Z: Worker pool stopped.
    Sep 17, 2021 6:49:28 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-16_23_45_12-4141909358847894549 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 159bfd6e-8072-4819-89bf-30c1e33f1379 and timestamp: 2021-09-17T06:49:28.583000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.111

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 17, 2021 6:49:29 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
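
This warning only means the metrics stayed local: the publisher found no measurement/database configured for this run, so nothing was written to InfluxDB. For reference, a hedged sketch of assembling such settings with the InfluxDBSettings builder from org.apache.beam.sdk.testutils.publishing follows; the host, database, and measurement values are placeholders, not values taken from this job.

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // Placeholder values; a real job would read these from pipeline options.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")
                .withDatabase("beam_test_metrics")
                .withMeasurement("sql_bqio_read_java_batch")
                .get();
        // The settings object would then be handed to the metrics publisher.
      }
    }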

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.043 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 37.912 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/f75ktoyzyggec

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2434

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2434/display/redirect?page=changes>

Changes:

[zyichi] [BEAM-12603] Add retries to FnApiRunnerTest due to flakiness of grpc

[noreply] [BEAM-12535] add dataframes notebook (#15470)


------------------------------------------
[...truncated 336.31 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 91a68d30729b97c362c4d1b5908c850f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 17, 2021 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
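
This deprecation traces back to the -DbeamTestPipelineOptions array in the executor command line above, which passes the legacy flag with an empty value. The fix the warning asks for is a one-token rename in that options list, sketched here rather than applied to the job:

    "--workerHarnessContainerImage="   becomes   "--sdkContainerImage="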
    Sep 17, 2021 12:44:58 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 17, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 17, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 17, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 17, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@977492762]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 17, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 17, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 17, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 17, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 17, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@271342901]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 17, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 17, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 17, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 17, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 17, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 17, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 17, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 17, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 17, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 17, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6936447019475044792.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-X9fphTc1zSRa_g-Bj2ZZLxeVgEiQq_nRNvSccdZ1mEo.jar
    Sep 17, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 17, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 17, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 75107f9df1418335b630d007da2d3678d6db15dfc5e3a4c2d164dab9f9ff7041> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-dRB_nfFBgzW2MNAH2i02eNbbFd_F46TC0WTaufn_cEE.pb
    Sep 17, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 17, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 17, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 17, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 17, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-16_17_45_13-13748401065240882438?project=apache-beam-testing
    Sep 17, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-16_17_45_13-13748401065240882438
    Sep 17, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-16_17_45_13-13748401065240882438
    Sep 17, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-17T00:45:16.283Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 17, 2021 12:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:45:23.434Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:45:24.230Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:45:24.267Z: Expanding GroupByKey operations into optimizable parts.
    Sep 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:45:24.301Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:45:24.399Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:45:24.429Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:45:24.476Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:45:24.845Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:45:24.918Z: Starting 5 workers in us-central1-a...
    Sep 17, 2021 12:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:45:38.937Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 17, 2021 12:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:46:08.175Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 17, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:46:34.385Z: Workers have started successfully.
    Sep 17, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:46:34.423Z: Workers have started successfully.
    Sep 17, 2021 12:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:47:02.481Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 17, 2021 12:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:47:02.649Z: Cleaning up.
    Sep 17, 2021 12:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:47:02.738Z: Stopping worker pool...
    Sep 17, 2021 12:49:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:49:29.200Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 17, 2021 12:49:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-17T00:49:29.281Z: Worker pool stopped.
    Sep 17, 2021 12:49:55 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-16_17_45_13-13748401065240882438 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7b0d8838-553a-4bfb-9641-87991658ade7 and timestamp: 2021-09-17T00:49:55.837000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.994

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 17, 2021 12:49:56 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 5 mins 2.907 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 37s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/cj5di3lxo7w7g

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2433

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2433/display/redirect?page=changes>

Changes:

[randomstep] [BEAM-12899] Upgrade Gradle to version 6.9.x

[noreply] [BEAM-12701] Added extra parameter in to_csv for DeferredFrame to name


------------------------------------------
[...truncated 361.32 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 91a68d30729b97c362c4d1b5908c850f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 5'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 5'
Successfully started process 'Gradle Test Executor 5'

Gradle Test Executor 5 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 16, 2021 6:49:35 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 16, 2021 6:49:38 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 16, 2021 6:49:39 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 16, 2021 6:49:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 6:49:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 6:49:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 16, 2021 6:49:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 6:49:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 6:49:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 16, 2021 6:49:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1125485903]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 16, 2021 6:49:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 16, 2021 6:49:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 6:49:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 16, 2021 6:49:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 16, 2021 6:49:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 6:49:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 16, 2021 6:49:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1764687859]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 16, 2021 6:49:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 6:49:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 6:49:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 16, 2021 6:49:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 6:49:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 6:49:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 16, 2021 6:49:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 16, 2021 6:49:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 16, 2021 6:50:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 16, 2021 6:50:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 16, 2021 6:50:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 16, 2021 6:50:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6402661844907658558.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-WvfikpCOqWWHTNuZNBQ4s8fYjx9nGhHU8VQ3zON4Euk.jar
    Sep 16, 2021 6:50:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 3 seconds
    Sep 16, 2021 6:50:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 16, 2021 6:50:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash ae7b5328ca459a1762cece2fd3ff70d5ea7f4a372fbf3966ab0c0c544a50ac02> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-rntTKMpFmhdizs4v0_9w1ep_SjcvvzlmqwwMVEpQrAI.pb
    Sep 16, 2021 6:50:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 16, 2021 6:50:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 16, 2021 6:50:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 16, 2021 6:50:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 16, 2021 6:50:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-16_11_50_14-4695151237778241422?project=apache-beam-testing
    Sep 16, 2021 6:50:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-16_11_50_14-4695151237778241422
    Sep 16, 2021 6:50:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-16_11_50_14-4695151237778241422
    Sep 16, 2021 6:50:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-16T18:50:23.675Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 16, 2021 6:50:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:50:37.850Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 16, 2021 6:50:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:50:39.155Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 16, 2021 6:50:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:50:39.481Z: Expanding GroupByKey operations into optimizable parts.
    Sep 16, 2021 6:50:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:50:39.749Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 16, 2021 6:50:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:50:40.173Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 16, 2021 6:50:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:50:40.266Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 16, 2021 6:50:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:50:40.366Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 16, 2021 6:50:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:50:41.650Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 16, 2021 6:50:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:50:41.793Z: Starting 5 workers in us-central1-c...
    Sep 16, 2021 6:51:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:51:06.017Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 16, 2021 6:51:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:51:23.104Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 16, 2021 6:51:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:51:52.946Z: Workers have started successfully.
    Sep 16, 2021 6:51:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:51:52.984Z: Workers have started successfully.
    Sep 16, 2021 6:52:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:52:40.453Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 16, 2021 6:52:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:52:41.990Z: Cleaning up.
    Sep 16, 2021 6:52:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:52:43.347Z: Stopping worker pool...
    Sep 16, 2021 6:55:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:55:08.150Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 16, 2021 6:55:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T18:55:08.367Z: Worker pool stopped.
    Sep 16, 2021 6:55:21 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-16_11_50_14-4695151237778241422 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1416c547-bf94-4053-ad53-9a220c44c91b and timestamp: 2021-09-16T18:55:21.909000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    16.753

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 16, 2021 6:55:23 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 59.596 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 10m 50s
152 actionable tasks: 106 executed, 46 from cache

Publishing build scan...
https://gradle.com/s/hhbkuvdcjigp4

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2432

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2432/display/redirect>

Changes:


------------------------------------------
[...truncated 341.16 KB...]
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 16, 2021 12:44:57 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 16, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 16, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 16, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@799287534]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
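
The exception above names its own remedy: attach a schema to the Row PCollection up front so no Row coder has to be inferred. The following is a minimal sketch of that pattern, not the IT's actual code; all names are illustrative, and it assumes a plain Java pipeline with the DirectRunner on the classpath:

    // Sketch only: illustrative names, not the IT's code.
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Schema for the four columns the logged query selects.
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        Row row =
            Row.withSchema(schema).addValues("alice", "story", "Show HN", 3L).build();

        // withRowSchema (or PCollection.setRowSchema on an existing Row
        // PCollection) supplies the schema explicitly, so the CoderRegistry
        // never has to infer a Row coder and the IllegalStateException above
        // cannot occur.
        PCollection<Row> rows = p.apply(Create.of(row).withRowSchema(schema));

        p.run().waitUntilFinish();
      }
    }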

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1883029333]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 16, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 16, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 16, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 16, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 16, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 16, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5057746223146241907.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-XOEh0C7VSvwz47bI8STE9p8ozRWqb3k7imOFA5KlQ6c.jar
    Sep 16, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/testcontainers/1.15.1/91e6dfab8f141f77c6a0dd147a94bd186993a22c/testcontainers-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/testcontainers-1.15.1-_fMrLpRXSowMnAISRc73bMk5UqrXNnsv6vkPXOZRXu0.jar
    Sep 16, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/kafka/1.15.1/ccd374ca7e5d8b06fb31ecffeb03abc42feada84/kafka-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/kafka-1.15.1-Z4qZ3jdWXDvdB7xD9VivkdVQiRZWNWUhPSIANVCSZDA.jar
    Sep 16, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-api/3.2.7/81408fc988c229ea11354fee9902c47842343f04/docker-java-api-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-api-3.2.7-AwtXDacySf3gUHoixVjjNqW0wUI0YcGJjMO-N7kxBho.jar
    Sep 16, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.process/2.1.2/986b38302fa10018d5baccee7f655c31ee9afe5b/de.flapdoodle.embed.process-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.process-2.1.2-OasY7D5KRAimcZcWcjFwgi8Qb4B-iff1FfrVeNSih6A.jar
    Sep 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/janino/3.0.11/e699e368095379ba0402ea1780a87fcaea16e68f/janino-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/janino-3.0.11-kje3HSMpGA5ZIQ6aqhAO4xNFTvCuWIYIx1yxkxlZG-E.jar
    Sep 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport-zerodep/3.2.7/d74cb05fabc57e9ec441dc39d0bc58ad649fad3d/docker-java-transport-zerodep-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-zerodep-3.2.7-O2a9rUb-899yDZMu4E8j9ItFOkNoggqbLP2xaLmOwJY.jar
    Sep 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/commons-compiler/3.0.11/f2a6ec7fbc929c9fc87ff8bb486c0574951c5b09/commons-compiler-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-compiler-3.0.11-DxpPXyZccBoxkzJErnBF_O8YtPpZUEF-Je5wvlDd2s8.jar
    Sep 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.mongo/2.2.0/781d14f4e3d9eeb0b4c3e00a4ec165a04b3dc5c4/de.flapdoodle.embed.mongo-2.2.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.mongo-2.2.0-vNy3lJC0jW9u4Cy1AHsqSbjRUqOTX9ycpEmHkht7vvk.jar
    Sep 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-csv/1.8/37ca9a9aa2d4be2599e55506a6d3170dd7a3df4/commons-csv-1.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-csv-1.8-qL1WZS7UZo2dWjOZSuUvWbnjnI6w68tmhOaK7udXmmE.jar
    Sep 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport/3.2.7/315903a129f530422747efc163dd255f0fa2555e/docker-java-transport-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-3.2.7-ynNXlSXhDSyiLCXkA4c9E9_HGeKB7-wbpEzmswFrI-A.jar
    Sep 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 239 files cached, 11 files newly uploaded in 1 seconds
    Sep 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 513c87cfb263bd332e090c542d31bdc8d487fa52b98bf052789417abfb1c7fa1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-UTyHz7JjvTMuCQxULTG9yNSH-lK5i_BSeJQXq_scf6E.pb
    Sep 16, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 16, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 16, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 16, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 16, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-16_05_45_11-14919211802293176260?project=apache-beam-testing
    Sep 16, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-16_05_45_11-14919211802293176260
    Sep 16, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-16_05_45_11-14919211802293176260
    Sep 16, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-16T12:45:14.700Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 16, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:45:29.214Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 16, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:45:30.067Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 16, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:45:30.152Z: Expanding GroupByKey operations into optimizable parts.
    Sep 16, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:45:30.208Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 16, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:45:30.322Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 16, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:45:30.370Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 16, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:45:30.412Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 16, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:45:31.081Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 16, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:45:31.225Z: Starting 5 workers in us-central1-c...
    Sep 16, 2021 12:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:46:00.219Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 16, 2021 12:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:46:05.286Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 16, 2021 12:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:46:05.353Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 16, 2021 12:46:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:46:15.704Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 16, 2021 12:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:46:41.005Z: Workers have started successfully.
    Sep 16, 2021 12:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:46:41.051Z: Workers have started successfully.
    Sep 16, 2021 12:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:47:16.525Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 16, 2021 12:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:47:16.787Z: Cleaning up.
    Sep 16, 2021 12:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:47:16.896Z: Stopping worker pool...
    Sep 16, 2021 12:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:49:41.156Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 16, 2021 12:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T12:49:41.249Z: Worker pool stopped.
    Sep 16, 2021 12:49:48 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-16_05_45_11-14919211802293176260 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 66159334-5e5e-4e13-bfce-f9f802171d10 and timestamp: 2021-09-16T12:49:48.939000000Z:
                     Metric:                    Value:
                   read_time                    13.629
                 fields_read                 4375276.0
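
The plan logged above pushes both the projection (usedFields=[by, type, title, score]) and the filter into the BigQuery read itself. As a hedged sketch of what that amounts to at the IO level, here is the equivalent read expressed directly against BigQueryIO's Storage API path; the table reference is a placeholder rather than the table the test reads, and the snippet assumes beam-sdks-java-io-google-cloud-platform is on the classpath:

    // Sketch only: placeholder table, fields and filter taken from the logs.
    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // DIRECT_READ is the BigQuery Storage API path; the selected fields and
        // row restriction mirror what the planner pushed down above, so only
        // the matching rows and columns ever leave BigQuery.
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("my-project:my_dataset.HACKER_NEWS") // placeholder
                    .withMethod(TypedRead.Method.DIRECT_READ)
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }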

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 16, 2021 12:49:49 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 56.972 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 30s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/opq4x3jtu6pf2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2431

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2431/display/redirect?page=changes>

Changes:

[ajamato] [BEAM-12898] Disable Flink Load tests which are leading Dataproc


------------------------------------------
[...truncated 338.56 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2277531f8cb1c01e0194391663a4e766
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
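
The -DbeamTestPipelineOptions JSON embedded in the command line above is what the test harness parses into typed pipeline options. A small illustrative sketch of how a subset of those flags maps onto DataflowPipelineOptions in Java, and why the runner later warns that the requested max number of workers is ignored (autoscalingAlgorithm=NONE pins the pool at numWorkers):

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class OptionsSketch {
      public static void main(String[] args) {
        // Subset of the flags from the worker command line; resolving
        // --runner=DataflowRunner assumes the Dataflow runner jar is on the
        // classpath.
        String[] flags = {
          "--project=apache-beam-testing",
          "--tempLocation=gs://temp-storage-for-perf-tests/loadtests",
          "--runner=DataflowRunner",
          "--region=us-central1",
          "--numWorkers=5",
          "--maxNumWorkers=5",
          "--autoscalingAlgorithm=NONE",
        };
        DataflowPipelineOptions options =
            PipelineOptionsFactory.fromArgs(flags).as(DataflowPipelineOptions.class);

        // With autoscaling disabled, the service keeps exactly numWorkers
        // workers, which is why the log warns that maxNumWorkers is ignored.
        System.out.println(options.getNumWorkers());    // 5
        System.out.println(options.getMaxNumWorkers()); // 5, but ignored
      }
    }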

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 16, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 16, 2021 6:45:07 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 16, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 16, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 16, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 16, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@213658433]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 16, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 16, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 16, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 16, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 16, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2038704487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
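
Both failures above occur while materializing the planned query through BeamSqlRelUtils.toPCollection. For reference, a sketch of the same SQL driven through Beam SQL's public SqlTransform API over a tiny in-memory table; this is illustrative only (it assumes beam-sdks-java-extensions-sql and the DirectRunner on the classpath) and is not how the IT invokes the planner:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    public class HackerNewsSqlSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Only the columns the query touches; the real table has 14 fields
        // (expr#0..13 in the BEAMPlan above).
        Schema schema =
            Schema.builder()
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("by", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .addNullableField("type", Schema.FieldType.STRING)
                .build();

        PCollection<Row> hackerNews =
            p.apply(
                Create.of(
                        Row.withSchema(schema)
                            .addValues("Show HN: demo", "alice", 5L, "story")
                            .build())
                    .withRowSchema(schema)); // avoids the Row coder failure above

        // The tag id becomes the table name the SQL references.
        PCollection<Row> result =
            PCollectionTuple.of(new TupleTag<Row>("HACKER_NEWS"), hackerNews)
                .apply(
                    SqlTransform.query(
                        "SELECT `by` AS author, `type`, `title`, `score` "
                            + "FROM HACKER_NEWS "
                            + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));

        p.run().waitUntilFinish();
      }
    }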

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 16, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 16, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 16, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 16, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 16, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 16, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 16, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 16, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2501984044205675756.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-DkuTZqyrOHQzTTH2U7X2_3KF5Ys2D8E_rncb0cli_rE.jar
    Sep 16, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 16, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 16, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 1f620931db966be012c886243848163cd2508005904362d253b847356f4483a8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-H2IJMduWa-ASyIYkOEgWPNJQgAWQQ2LSU7hHNW9Eg6g.pb
    Sep 16, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 16, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 16, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 16, 2021 6:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 16, 2021 6:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-15_23_45_22-3612431813213432821?project=apache-beam-testing
    Sep 16, 2021 6:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-15_23_45_22-3612431813213432821
    Sep 16, 2021 6:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-15_23_45_22-3612431813213432821
    Sep 16, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-16T06:45:25.657Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 16, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:45:34.316Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 16, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:45:35.075Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 16, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:45:35.148Z: Expanding GroupByKey operations into optimizable parts.
    Sep 16, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:45:35.208Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 16, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:45:35.340Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 16, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:45:35.382Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 16, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:45:35.425Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 16, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:45:36.015Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 16, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:45:36.146Z: Starting 5 workers in us-central1-c...
    Sep 16, 2021 6:45:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:45:50.433Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 16, 2021 6:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:46:07.302Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 16, 2021 6:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:46:07.355Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 16, 2021 6:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:46:17.620Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 16, 2021 6:46:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:46:42.606Z: Workers have started successfully.
    Sep 16, 2021 6:46:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:46:42.647Z: Workers have started successfully.
    Sep 16, 2021 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:47:14.950Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 16, 2021 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:47:15.132Z: Cleaning up.
    Sep 16, 2021 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:47:15.291Z: Stopping worker pool...
    Sep 16, 2021 6:49:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:49:27.384Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 16, 2021 6:49:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T06:49:27.439Z: Worker pool stopped.
    Sep 16, 2021 6:49:35 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-15_23_45_22-3612431813213432821 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 39606f97-94cd-4f1d-b25a-62181c45ba1c and timestamp: 2021-09-16T06:49:35.095000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.362

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 16, 2021 6:49:36 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 34.374 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/qz3iavuu2quwu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2430

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2430/display/redirect?page=changes>

Changes:

[dpcollins] [BEAM-12882] - fix test that is flaky when jenkins is overloaded

[noreply] [BEAM-12543] Fix DataFrrame typo (#15509)

[noreply] [BEAM-12794] Remove obsolete uses of sys.exc_info. (#15507)

[noreply] [BEAM-11666]  flake on RecordingManagerTest (#15118)

[kawaigin] [BEAM-10708] Introspect beam_sql output

[noreply] Minor: Restore "Bugfix" section in CHANGES.md (#15516)

[Kyle Weaver] [BEAM-10459] Unignore numeric aggregation tests.


------------------------------------------
[...truncated 353.41 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2277531f8cb1c01e0194391663a4e766
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 16, 2021 12:50:05 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 16, 2021 12:50:08 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 16, 2021 12:50:10 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 16, 2021 12:50:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 12:50:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 12:50:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 16, 2021 12:50:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 12:50:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 12:50:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 16, 2021 12:50:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1711003349]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
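
The two remedies this message names are PCollection.setCoder and PCollection.setRowSchema. A minimal sketch of the latter, with field names taken from the query above (the real HACKER_NEWS schema has more columns, and the nullability/types here are assumptions):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // Attaching a schema lets Beam infer a coder for the Row collection.
      static PCollection<Row> withQuerySchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("by", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();
        // Equivalent explicit form: rows.setCoder(RowCoder.of(schema));
        return rows.setRowSchema(schema);
      }
    }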

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 16, 2021 12:50:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 16, 2021 12:50:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 12:50:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 16, 2021 12:50:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 16, 2021 12:50:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 12:50:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 16, 2021 12:50:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1656216931]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 16, 2021 12:50:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 12:50:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 12:50:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 16, 2021 12:50:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 16, 2021 12:50:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 16, 2021 12:50:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 16, 2021 12:50:37 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 16, 2021 12:50:37 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
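
For contrast with the two failures above, this is the path where push-down succeeds: the planner emits a BeamPushDownIOSourceRel and the filter travels to the BigQuery Storage read instead of being evaluated in a BeamCalcRel. As a sketch, the same query is issued through the public API roughly like this (the tuple tag is how a PCollection gets its SQL table name; the push-down itself happens inside the BigQuery table provider, not in this snippet):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    class PushDownQuerySketch {
      static PCollection<Row> query(PCollection<Row> hackerNews) {
        // The tag name becomes the table name visible to the SQL planner.
        return PCollectionTuple.of(new TupleTag<>("HACKER_NEWS"), hackerNews)
            .apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
      }
    }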
    Sep 16, 2021 12:50:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 16, 2021 12:50:46 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 16, 2021 12:50:46 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 16, 2021 12:50:48 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5890461711699957094.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-PRJcv0kQfXeu_NbA-hs-AwOOn27aWtugRvDgDA0K7og.jar
    Sep 16, 2021 12:50:52 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 5 seconds
    Sep 16, 2021 12:50:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 16, 2021 12:50:52 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 08a72176caefa466e2c4b3e58f0e878f9e355fe082139f4ab7a6c60171593fe6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-CKchdsrvpGbixLPljw6Hj541X-CCE59Kt6bGAXFZP-Y.pb
    Sep 16, 2021 12:50:59 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 16, 2021 12:50:59 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 16, 2021 12:50:59 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 16, 2021 12:51:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 16, 2021 12:51:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-15_17_51_00-16927309341686112669?project=apache-beam-testing
    Sep 16, 2021 12:51:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-15_17_51_00-16927309341686112669
    Sep 16, 2021 12:51:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-15_17_51_00-16927309341686112669
    Sep 16, 2021 12:51:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-16T00:51:04.087Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 16, 2021 12:51:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:51:11.725Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 16, 2021 12:51:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:51:12.495Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 16, 2021 12:51:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:51:12.543Z: Expanding GroupByKey operations into optimizable parts.
    Sep 16, 2021 12:51:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:51:12.584Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 16, 2021 12:51:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:51:12.697Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 16, 2021 12:51:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:51:12.736Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 16, 2021 12:51:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:51:12.770Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 16, 2021 12:51:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:51:13.230Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 16, 2021 12:51:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:51:13.324Z: Starting 5 workers in us-central1-c...
    Sep 16, 2021 12:51:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:51:23.628Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 16, 2021 12:51:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:51:43.204Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 16, 2021 12:51:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:51:43.243Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 16, 2021 12:51:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:51:53.545Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 16, 2021 12:52:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:52:32.723Z: Workers have started successfully.
    Sep 16, 2021 12:52:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:52:32.759Z: Workers have started successfully.
    Sep 16, 2021 12:53:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:53:06.315Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 16, 2021 12:53:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:53:06.520Z: Cleaning up.
    Sep 16, 2021 12:53:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:53:06.665Z: Stopping worker pool...
    Sep 16, 2021 12:55:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:55:21.681Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 16, 2021 12:55:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-16T00:55:21.727Z: Worker pool stopped.
    Sep 16, 2021 12:55:27 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-15_17_51_00-16927309341686112669 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b2d20870-6a8d-4553-8af9-d5f8131a9cf8 and timestamp: 2021-09-16T00:55:27.632000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.357

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 16, 2021 12:55:28 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 36.102 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 10m 10s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/uhngoaqikozim

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2429

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2429/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12885] Enable NeedsRunner Tests for Samza Portable Runner (#15512)

[noreply] [BEAM-12100][BEAM-10379][BEAM-9514][BEAM-12647][BEAM-12099]


------------------------------------------
[...truncated 382.00 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 7c8314d0b1ab331a864b535bbeee5df5
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 29'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 29'
Successfully started process 'Gradle Test Executor 29'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 15, 2021 6:57:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 15, 2021 6:57:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 15, 2021 6:57:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 15, 2021 6:57:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 6:57:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 6:57:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 15, 2021 6:57:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 6:57:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 6:57:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 15, 2021 6:57:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@213658433]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 15, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 15, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 15, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 15, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 15, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@280858478]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 15, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 15, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 15, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 15, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
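
The DIRECT_READ method that enables this push-down is configured per table. In Beam SQL's DDL form it is a TBLPROPERTIES entry; a sketch follows, where the column list and LOCATION are placeholders rather than values taken from this log:

    class HackerNewsDdlSketch {
      // "method" selects the BigQuery read path; DIRECT_READ uses the Storage API,
      // which is what allows the filter above to be shipped to the source.
      static final String DDL =
          "CREATE EXTERNAL TABLE HACKER_NEWS (\n"
              + "  title VARCHAR, `by` VARCHAR, score BIGINT, type VARCHAR\n"
              + ")\n"
              + "TYPE bigquery\n"
              + "LOCATION 'my-project:my_dataset.hacker_news'\n"
              + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";
    }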
    Sep 15, 2021 6:58:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 15, 2021 6:58:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 15, 2021 6:58:06 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 15, 2021 6:58:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-tests-jNBHRplzujunR2ca_W33DKopb0I1N66c2TzlUZA6-A8.jar
    Sep 15, 2021 6:58:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-d_4G4rn4Rll5Aje4hNipJhUiscPvjW925Yx4SqBJtnY.jar
    Sep 15, 2021 6:58:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3754504656764621449.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-0XwWZp4LZjZJ7MBWiORdn18Au1rd_adIdZv51viV4cc.jar
    Sep 15, 2021 6:58:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 3 files newly uploaded in 0 seconds
    Sep 15, 2021 6:58:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 15, 2021 6:58:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 6c1ac708e8e335b1f27f858cb7b4abb8c202645bef25a08f3344df26896988d6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-bBrHCOjjNbHyf4WMt7SruMICZFvvJaCPM0TfJolpiNY.pb
    Sep 15, 2021 6:58:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 15, 2021 6:58:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 15, 2021 6:58:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 15, 2021 6:58:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 15, 2021 6:58:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-15_11_58_09-1410383782212129160?project=apache-beam-testing
    Sep 15, 2021 6:58:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-15_11_58_09-1410383782212129160
    Sep 15, 2021 6:58:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-15_11_58_09-1410383782212129160
    Sep 15, 2021 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-15T18:58:13.447Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 15, 2021 6:58:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T18:58:21.157Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 15, 2021 6:58:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T18:58:21.938Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 15, 2021 6:58:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T18:58:21.968Z: Expanding GroupByKey operations into optimizable parts.
    Sep 15, 2021 6:58:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T18:58:21.988Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 15, 2021 6:58:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T18:58:22.054Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 15, 2021 6:58:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T18:58:22.082Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 15, 2021 6:58:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T18:58:22.126Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 15, 2021 6:58:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T18:58:22.471Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 15, 2021 6:58:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T18:58:22.544Z: Starting 5 workers in us-central1-a...
    Sep 15, 2021 6:58:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T18:58:53.924Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 15, 2021 6:59:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T18:59:18.647Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Sep 15, 2021 6:59:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T18:59:18.672Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Sep 15, 2021 6:59:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T18:59:41.718Z: Workers have started successfully.
    Sep 15, 2021 6:59:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T18:59:41.749Z: Workers have started successfully.
    Sep 15, 2021 6:59:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T18:59:50.070Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 15, 2021 7:00:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T19:00:12.344Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 15, 2021 7:00:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T19:00:12.478Z: Cleaning up.
    Sep 15, 2021 7:00:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T19:00:12.548Z: Stopping worker pool...
    Sep 15, 2021 7:02:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T19:02:40.064Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 15, 2021 7:02:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T19:02:40.123Z: Worker pool stopped.
    Sep 15, 2021 7:02:47 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-15_11_58_09-1410383782212129160 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 626a92d7-cf90-4df2-a8b6-dc39b613242c and timestamp: 2021-09-15T19:02:47.866000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      8.64

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 15, 2021 7:02:48 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 29 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.006 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 57.531 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 18m 28s
152 actionable tasks: 136 executed, 16 from cache

Publishing build scan...
https://gradle.com/s/shar7cztvl3es

Stopped 11 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2428

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2428/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12153] revert "implement GroupByKey with CombinePerKey with

[noreply] [BEAM-12845] Add AWS services as a runtime dependency to Spark Job


------------------------------------------
[...truncated 358.36 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 6 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 87fb8cbd9bf06acd483f26e1ccaafa5a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 6'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 6'
Successfully started process 'Gradle Test Executor 6'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 15, 2021 12:47:25 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 15, 2021 12:47:26 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 15, 2021 12:47:26 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 15, 2021 12:47:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 12:47:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 12:47:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 15, 2021 12:47:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 12:47:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 12:47:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 15, 2021 12:47:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1386607672]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 15, 2021 12:47:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 15, 2021 12:47:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 12:47:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 15, 2021 12:47:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 15, 2021 12:47:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 12:47:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 15, 2021 12:47:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@725804086]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 15, 2021 12:47:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 12:47:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 12:47:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 15, 2021 12:47:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 12:47:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 12:47:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 15, 2021 12:47:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 15, 2021 12:47:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
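
    The pushed-down projection (usedFields) and filter translate to a BigQuery Storage Read API read with selected fields and a row restriction, which is why this test variant avoids the coder failure and reads far less data. A hedged sketch of the equivalent direct read via BigQueryIO's TypedRead API follows; the table reference is a placeholder, not taken from this log:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.values.PCollection;

    class PushdownReadSketch {
      // The projection and filter are handed to the Storage Read API, so
      // only the four used fields and the matching rows leave BigQuery.
      static PCollection<TableRow> read(Pipeline p) {
        return p.apply(
            BigQueryIO.readTableRows()
                .from("my-project:my_dataset.HACKER_NEWS")  // placeholder table
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));
      }
    }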
    Sep 15, 2021 12:47:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 15, 2021 12:47:36 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 15, 2021 12:47:36 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 15, 2021 12:47:36 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4874788525819239956.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-gTwDvKiHwPyHLjgQ_xJVguy4PPe8vdUuQ6_w78pCYXs.jar
    Sep 15, 2021 12:47:37 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 15, 2021 12:47:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 15, 2021 12:47:37 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 32fd05be82c3d3d46105e89e00b53042f1c5f6c5468481c07401c0414c1973d8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Mv0FvoLD09RhBeieALUwQvHF9sVGhIHAdAHAQUwZc9g.pb
    Sep 15, 2021 12:47:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 15, 2021 12:47:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 15, 2021 12:47:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 15, 2021 12:47:39 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 15, 2021 12:47:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-15_05_47_39-14570986282291584778?project=apache-beam-testing
    Sep 15, 2021 12:47:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-15_05_47_39-14570986282291584778
    Sep 15, 2021 12:47:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-15_05_47_39-14570986282291584778
    Sep 15, 2021 12:47:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-15T12:47:43.371Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
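
    This warning is expected for these tests: with --autoscalingAlgorithm=NONE the job runs on the fixed --numWorkers=5 and the --maxNumWorkers cap is ignored. A combination where the cap is honored would enable autoscaling, for example (option values here are illustrative only):

    --numWorkers=1 --autoscalingAlgorithm=THROUGHPUT_BASED --maxNumWorkers=5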
    Sep 15, 2021 12:47:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:47:47.002Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 15, 2021 12:47:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:47:47.704Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 15, 2021 12:47:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:47:47.790Z: Expanding GroupByKey operations into optimizable parts.
    Sep 15, 2021 12:47:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:47:47.843Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 15, 2021 12:47:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:47:48.072Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 15, 2021 12:47:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:47:48.121Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 15, 2021 12:47:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:47:48.163Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 15, 2021 12:47:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:47:48.502Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 15, 2021 12:47:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:47:48.572Z: Starting 5 workers in us-central1-a...
    Sep 15, 2021 12:48:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:48:22.043Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 15, 2021 12:48:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:48:34.256Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 15, 2021 12:49:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:49:01.372Z: Workers have started successfully.
    Sep 15, 2021 12:49:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:49:01.403Z: Workers have started successfully.
    Sep 15, 2021 12:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:49:32.680Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 15, 2021 12:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:49:32.824Z: Cleaning up.
    Sep 15, 2021 12:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:49:32.911Z: Stopping worker pool...
    Sep 15, 2021 12:51:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:51:59.347Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 15, 2021 12:51:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T12:51:59.387Z: Worker pool stopped.
    Sep 15, 2021 12:52:06 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-15_05_47_39-14570986282291584778 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1d5598e4-5eac-48c0-a20b-bba8e69d4655 and timestamp: 2021-09-15T12:52:06.445000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.414

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 15, 2021 12:52:06 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
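
    Because no InfluxDB measurement/database is configured, the metrics above are only printed to stdout. Supplying the publisher's settings through pipeline options should enable publishing; the flag names below follow other Beam load-test jobs and are assumptions, not taken from this log:

    --influxDatabase=beam_test_metrics --influxMeasurement=sql_bqio_read_java_batch --influxHost=http://localhost:8086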

Gradle Test Executor 6 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 45.209 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 45s
152 actionable tasks: 112 executed, 40 from cache

Publishing build scan...
https://gradle.com/s/yq4jfemvsxskm

Stopped 5 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2427

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2427/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12876] Adding doc and glossary entry for resource hints (#15499)


------------------------------------------
[...truncated 338.45 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 50340803db392ee5f0c7210f39a33b6d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 15, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 15, 2021 6:45:01 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 15, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 15, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 15, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 15, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@213658433]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 15, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 15, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2038704487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 15, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 15, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 15, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 15, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 15, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 15, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1508267424378395638.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-t-GsRI2oOmC8XMemse_AYlD077VmtZNdXz46aZbMS2U.jar
    Sep 15, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 15, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 15, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 3e4609834e00ee97ce09fe3ebb283464c8fd45a84ae3028311cb2f2f90e1d684> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-PkYJg04A7pfOCf4-uyg0ZMj9RahK4wKDEcsvL5Dh1oQ.pb
    Sep 15, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 15, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 15, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 15, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 15, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-14_23_45_16-6956103674034291982?project=apache-beam-testing
    Sep 15, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-14_23_45_16-6956103674034291982
    Sep 15, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-14_23_45_16-6956103674034291982
    Sep 15, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-15T06:45:19.649Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 15, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:45:26.521Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 15, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:45:27.445Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 15, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:45:27.477Z: Expanding GroupByKey operations into optimizable parts.
    Sep 15, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:45:27.520Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 15, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:45:27.595Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 15, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:45:27.632Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 15, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:45:27.666Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 15, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:45:27.992Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 15, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:45:28.090Z: Starting 5 workers in us-central1-c...
    Sep 15, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:45:36.135Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 15, 2021 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:45:58.340Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Sep 15, 2021 6:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:45:58.364Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Sep 15, 2021 6:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:46:08.569Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 15, 2021 6:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:46:32.932Z: Workers have started successfully.
    Sep 15, 2021 6:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:46:32.960Z: Workers have started successfully.
    Sep 15, 2021 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:47:02.125Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 15, 2021 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:47:02.250Z: Cleaning up.
    Sep 15, 2021 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:47:02.335Z: Stopping worker pool...
    Sep 15, 2021 6:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:49:26.307Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 15, 2021 6:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T06:49:26.342Z: Worker pool stopped.
    Sep 15, 2021 6:49:31 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-14_23_45_16-6956103674034291982 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0d25d108-b7c2-4cae-a366-b5d8933b1a71 and timestamp: 2021-09-15T06:49:31.473000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.224

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 15, 2021 6:49:31 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 35.526 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/k2g6epnqkxhig

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2426

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2426/display/redirect?page=changes>

Changes:

[dhuntsperger] updated Maven-to-Gradle conversion step in Java quickstart

[noreply] [BEAM-10913] - Updating Grafana from v6.7.3 to v8.1.2 (#15503)


------------------------------------------
[...truncated 337.32 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 50340803db392ee5f0c7210f39a33b6d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 15, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 15, 2021 12:45:02 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 15, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 15, 2021 12:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 12:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 15, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 15, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@213658433]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 15, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 15, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 15, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 15, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 15, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2038704487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 15, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 15, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 15, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 15, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 15, 2021 12:45:10 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 15, 2021 12:45:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 15, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 15, 2021 12:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 15, 2021 12:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 15, 2021 12:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7177806670991651038.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-X5XGp7BbdKofL5M3t1oJzMV0ZAKan7JAf7nTUN0SiPw.jar
    Sep 15, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 15, 2021 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 15, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 519ca277e1e0998cfb67febfcc1e33bc2b6b8a688f61f619a380e6d66c66326c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-UZyid-HgmYz7Z_6_zB4zvCtrimiPYfYZo4Dm1mxmMmw.pb
    Sep 15, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 15, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 15, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 15, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 15, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-14_17_45_17-675699696737844118?project=apache-beam-testing
    Sep 15, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-14_17_45_17-675699696737844118
    Sep 15, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-14_17_45_17-675699696737844118
    Sep 15, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-15T00:45:20.787Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 15, 2021 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:45:34.452Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 15, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:45:35.431Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 15, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:45:35.463Z: Expanding GroupByKey operations into optimizable parts.
    Sep 15, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:45:35.501Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 15, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:45:35.574Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 15, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:45:35.602Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 15, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:45:35.635Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 15, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:45:36.004Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 15, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:45:36.095Z: Starting 5 workers in us-central1-c...
    Sep 15, 2021 12:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:45:52.398Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 15, 2021 12:46:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:46:20.785Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 15, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:46:51.440Z: Workers have started successfully.
    Sep 15, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:46:51.479Z: Workers have started successfully.
    Sep 15, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:47:22.105Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 15, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:47:22.295Z: Cleaning up.
    Sep 15, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:47:22.409Z: Stopping worker pool...
    Sep 15, 2021 12:49:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:49:44.822Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 15, 2021 12:49:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-15T00:49:44.878Z: Worker pool stopped.
    Sep 15, 2021 12:49:51 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-14_17_45_17-675699696737844118 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f4c762cf-36d5-4008-80fe-b052bee83311 and timestamp: 2021-09-15T00:49:51.298000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.062

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 15, 2021 12:49:51 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
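
This warning recurs in the runs below: the load-test metrics are computed, but
nothing is written to InfluxDB because the measurement/database properties are
unset. A sketch of the likely fix, assuming the --influxMeasurement and
--influxDatabase option names used elsewhere in Beam's performance-test jobs
(neither name nor the placeholder values below appear in this log), added to
the -DbeamTestPipelineOptions array that launches the test executor:

    "--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=beam_test_metrics"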

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.039 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 53.857 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 30s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/xntrnciymt4o2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2425

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2425/display/redirect>

Changes:


------------------------------------------
[...truncated 337.59 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 50340803db392ee5f0c7210f39a33b6d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 14, 2021 6:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 14, 2021 6:44:59 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 14, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 14, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 14, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 14, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@213658433]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
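
This failure and the readUsingDefaultMethod failure below share one root cause:
the RowMonitor ParDo emits Beam Row elements, and a coder for Row cannot be
inferred without a schema. A minimal sketch of the remedy the message itself
suggests (PCollection.setRowSchema), using a hypothetical schema and input
rather than the test's actual code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TypeDescriptors;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Hypothetical schema for the four projected columns of the query above.
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();
        PCollection<Row> rows = p
            .apply(Create.of("a title"))
            .apply(MapElements.into(TypeDescriptors.rows())
                .via(title -> Row.withSchema(schema)
                    .addValues("someone", "story", title, 3L)
                    .build()))
            // Without the schema, construction fails with the IllegalStateException
            // above; rows.setCoder(RowCoder.of(schema)) would work equally well.
            .setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }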

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 14, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 14, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 14, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 14, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 14, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2038704487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 14, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 14, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 14, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 14, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
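
The BEAMPlan above pushes both the projection (usedFields) and the filter into
the storage-level read. Expressed directly against BigQueryIO rather than
through the SQL layer, an equivalent read would look roughly like the sketch
below; the table reference is a placeholder, not the test's actual input:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.apply(BigQueryIO.readTableRows()
            .from("some-project:some_dataset.HACKER_NEWS") // placeholder table
            .withMethod(Method.DIRECT_READ) // BigQuery Storage Read API
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish(); // needs real GCP credentials and a real table
      }
    }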
    Sep 14, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 14, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 14, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 14, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3266677501138577389.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-w30y57adrBjTMjjiNGAUi9tKtDpw1Ti_nJ60PUS7xuY.jar
    Sep 14, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 14, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 14, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 80f7baa768dfcd015af5ffa368b4e3f62cbb8ad533c90bbe89da4f5b07815aff> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-gPe6p2jfzQFa9f-jaLTj9iy7itUzyQu-idpPWweBWv8.pb
    Sep 14, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 14, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 14, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 14, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 14, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-14_11_45_14-8029357531428479480?project=apache-beam-testing
    Sep 14, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-14_11_45_14-8029357531428479480
    Sep 14, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-14_11_45_14-8029357531428479480
    Sep 14, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-14T18:45:17.810Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 14, 2021 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:45:23.630Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 14, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:45:24.484Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 14, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:45:24.517Z: Expanding GroupByKey operations into optimizable parts.
    Sep 14, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:45:24.548Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 14, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:45:24.617Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 14, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:45:24.638Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 14, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:45:24.661Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 14, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:45:25.028Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 14, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:45:25.097Z: Starting 5 workers in us-central1-a...
    Sep 14, 2021 6:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:45:34.013Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 14, 2021 6:46:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:46:08.638Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 14, 2021 6:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:46:34.989Z: Workers have started successfully.
    Sep 14, 2021 6:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:46:35.018Z: Workers have started successfully.
    Sep 14, 2021 6:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:47:04.622Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 14, 2021 6:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:47:04.780Z: Cleaning up.
    Sep 14, 2021 6:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:47:04.849Z: Stopping worker pool...
    Sep 14, 2021 6:49:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:49:21.124Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 14, 2021 6:49:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T18:49:21.169Z: Worker pool stopped.
    Sep 14, 2021 6:49:28 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-14_11_45_14-8029357531428479480 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5d4c5175-56c9-4858-a5d2-ddc9283bec6f and timestamp: 2021-09-14T18:49:28.965000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.196

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 14, 2021 6:49:29 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 34.923 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/avqtz2lckr3em

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2424

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2424/display/redirect>

Changes:


------------------------------------------
[...truncated 338.53 KB...]
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 50340803db392ee5f0c7210f39a33b6d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 14, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 14, 2021 12:45:12 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 14, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 14, 2021 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 12:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 14, 2021 12:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 12:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 12:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 14, 2021 12:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
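
For reference, plans like the SQLPlan/BEAMPlan pairs in these logs come from
applying a Beam SQL query; the same query shape can be run against any
schema-aware PCollection. A self-contained sketch with a hypothetical schema
and input row (not the test's code, which reads the table through a provider):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class BeamSqlSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();
        PCollection<Row> input = p.apply(
            Create.of(Row.withSchema(schema)
                    .addValues("someone", "story", "a title", 3L)
                    .build())
                .withRowSchema(schema));
        // Calcite rewrites this into a LogicalProject over a LogicalFilter and
        // then a BeamCalcRel, matching the SQLPlan/BEAMPlan logged above.
        input.apply(SqlTransform.query(
            "SELECT `by` AS `author`, `type`, `title`, `score` FROM PCOLLECTION "
                + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));
        p.run().waitUntilFinish();
      }
    }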


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@911883870]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 14, 2021 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 14, 2021 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 14, 2021 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 14, 2021 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 14, 2021 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1058462591]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 14, 2021 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 12:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 14, 2021 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 12:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 14, 2021 12:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 14, 2021 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 14, 2021 12:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 14, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 14, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 14, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8340718730747719098.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-KCoGKIaDBca9RILfrN52yQ_6X7hVo6qXD09WT4ntTUs.jar
    Sep 14, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 14, 2021 12:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 14, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 10396efb33699286baaf518c503544d219372589011a4e97f276e196d0bd6fd4> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-EDlu-zNpkoa6r1GMUDVE0hk3JYkBGk6X8nbhltC9b9Q.pb
    Sep 14, 2021 12:45:26 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 14, 2021 12:45:26 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 14, 2021 12:45:26 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 14, 2021 12:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 14, 2021 12:45:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-14_05_45_27-4524187170090040190?project=apache-beam-testing
    Sep 14, 2021 12:45:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-14_05_45_27-4524187170090040190
    Sep 14, 2021 12:45:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-14_05_45_27-4524187170090040190
    Sep 14, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-14T12:45:30.718Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 14, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:45:38.192Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 14, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:45:39.130Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 14, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:45:39.169Z: Expanding GroupByKey operations into optimizable parts.
    Sep 14, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:45:39.190Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 14, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:45:39.271Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 14, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:45:39.299Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 14, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:45:39.332Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 14, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:45:39.663Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 14, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:45:39.733Z: Starting 5 workers in us-central1-c...
    Sep 14, 2021 12:45:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:45:57.279Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 14, 2021 12:46:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:46:14.997Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 14, 2021 12:46:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:46:15.030Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 14, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:46:25.226Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 14, 2021 12:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:46:49.598Z: Workers have started successfully.
    Sep 14, 2021 12:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:46:49.628Z: Workers have started successfully.
    Sep 14, 2021 12:47:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:47:21.648Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 14, 2021 12:47:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:47:21.802Z: Cleaning up.
    Sep 14, 2021 12:47:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:47:21.881Z: Stopping worker pool...
    Sep 14, 2021 12:49:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:49:43.657Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 14, 2021 12:49:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T12:49:43.734Z: Worker pool stopped.
    Sep 14, 2021 12:49:50 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-14_05_45_27-4524187170090040190 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 14eeaa76-e500-489a-8a3d-82ff2bd734a0 and timestamp: 2021-09-14T12:49:50.386000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.465

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 14, 2021 12:49:50 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.046 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 44.889 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 26s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/4apcgrbbyiugi

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2423

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2423/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11980] Java GCS - Implement IO Request Count metrics (#15394)


------------------------------------------
[...truncated 346.50 KB...]
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 50340803db392ee5f0c7210f39a33b6d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 14, 2021 6:47:41 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 14, 2021 6:47:42 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 14, 2021 6:47:42 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 14, 2021 6:47:46 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 6:47:46 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 6:47:47 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 14, 2021 6:47:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 6:47:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 6:47:47 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 14, 2021 6:47:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@213658433]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
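
The IllegalStateException above spells out its own fix: the Row-typed output of ParDo(RowMonitor) has no inferable coder, so the PCollection needs a schema attached. Below is a minimal standalone sketch of the setRowSchema pattern the message recommends (hypothetical schema and values, not the integration test's actual code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Illustrative stand-in for the projected HACKER_NEWS fields.
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT32)
            .build();

        PCollection<Row> monitored =
            p.apply(Create.of(Row.withSchema(schema).addValues("alice", 3).build())
                    .withRowSchema(schema))
             .apply("RowMonitor", ParDo.of(new DoFn<Row, Row>() {
               @ProcessElement
               public void process(@Element Row row, OutputReceiver<Row> out) {
                 out.output(row); // pass-through, like the RowMonitor above
               }
             }))
             // ParDo cannot infer a coder for Row; attach the schema explicitly,
             // as the exception message recommends.
             .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }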

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 14, 2021 6:47:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 14, 2021 6:47:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 6:47:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 14, 2021 6:47:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 14, 2021 6:47:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 6:47:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 14, 2021 6:47:49 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2038704487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 14, 2021 6:47:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 6:47:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 6:47:49 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 14, 2021 6:47:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 6:47:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 6:47:49 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 14, 2021 6:47:49 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 14, 2021 6:47:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
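
    The pushed-down plan above maps, roughly, onto BigQueryIO's Storage Read API options: DIRECT_READ with selected fields and a row restriction. A hypothetical standalone sketch of that shape (the table reference is illustrative; this is not the code the planner emits):

    import java.util.Arrays;

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // illustrative table
                .withMethod(TypedRead.Method.DIRECT_READ)
                // usedFields=[[by, type, title, score]] from the BEAMPlan above
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // the "supported" filter, applied server-side as a row restriction
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
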
    Sep 14, 2021 6:47:50 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 14, 2021 6:47:54 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 14, 2021 6:47:54 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 14, 2021 6:47:54 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7311546230794284275.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-njW_bkGpz7HjBKJPPTev1JaK1i0yExhGjkvvEaOyfis.jar
    Sep 14, 2021 6:47:55 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 14, 2021 6:47:55 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 14, 2021 6:47:55 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash cb29fcf98b9eae1d3729108395eb5b2d37ed03a92d92391516e2b59c5ad6b5e6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-yyn8-Yuerh03KRCDletbLTftA6ktkjkVFuK1nFrWteY.pb
    Sep 14, 2021 6:47:57 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 14, 2021 6:47:58 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 14, 2021 6:47:58 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 14, 2021 6:47:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 14, 2021 6:47:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-13_23_47_58-3252105525232313110?project=apache-beam-testing
    Sep 14, 2021 6:47:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-13_23_47_58-3252105525232313110
    Sep 14, 2021 6:47:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-13_23_47_58-3252105525232313110
    Sep 14, 2021 6:48:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-14T06:48:02.194Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 14, 2021 6:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:48:08.445Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 14, 2021 6:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:48:09.177Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 14, 2021 6:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:48:09.217Z: Expanding GroupByKey operations into optimizable parts.
    Sep 14, 2021 6:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:48:09.258Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 14, 2021 6:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:48:09.335Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 14, 2021 6:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:48:09.363Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 14, 2021 6:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:48:09.385Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 14, 2021 6:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:48:09.696Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 14, 2021 6:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:48:09.776Z: Starting 5 workers in us-central1-a...
    Sep 14, 2021 6:48:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:48:32.989Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 14, 2021 6:48:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:48:39.806Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 14, 2021 6:48:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:48:39.836Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 14, 2021 6:48:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:48:50.079Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 14, 2021 6:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:49:15.074Z: Workers have started successfully.
    Sep 14, 2021 6:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:49:15.094Z: Workers have started successfully.
    Sep 14, 2021 6:49:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:49:42.814Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 14, 2021 6:49:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:49:42.953Z: Cleaning up.
    Sep 14, 2021 6:49:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:49:43.024Z: Stopping worker pool...
    Sep 14, 2021 6:52:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:52:10.924Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 14, 2021 6:52:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T06:52:10.961Z: Worker pool stopped.
    Sep 14, 2021 6:52:16 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-13_23_47_58-3252105525232313110 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0ff2fefd-320e-44c7-a6fe-57ff750f8ed9 and timestamp: 2021-09-14T06:52:16.624000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.488

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 14, 2021 6:52:17 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.038 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.048 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 40.05 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 55s
152 actionable tasks: 103 executed, 49 from cache

Publishing build scan...
https://gradle.com/s/63c6nby6atcme

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2422

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2422/display/redirect?page=changes>

Changes:

[dhuntsperger] fixed broken Python tab on HCatalog IO page

[noreply] Avoid apiary submission of job graph when it is not needed. (#15458)

[noreply] [BEAM-7261] Add support for BasicSessionCredentials for AWS credentials.

[noreply] Bump dataflow java container version to beam-master-20210913 (#15506)


------------------------------------------
[...truncated 338.57 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e293a3d2c89581eb30e12c16bdf416da
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
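
The -DbeamTestPipelineOptions JSON array on the command line above is how the test JVM receives its pipeline configuration: TestPipeline deserializes that system property into PipelineOptions. A minimal sketch of the mechanism (hypothetical standalone example):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class OptionsSketch {
      public static void main(String[] args) {
        // Reads the JSON array from the beamTestPipelineOptions system property
        // (falling back to defaults when it is unset) and deserializes it.
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        System.out.println(options.getRunner().getSimpleName());
      }
    }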

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 14, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 14, 2021 12:45:01 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 14, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 14, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 14, 2021 12:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 12:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 12:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 14, 2021 12:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1386607672]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 14, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 14, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 14, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 14, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 14, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@725804086]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 14, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 14, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 14, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 14, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 14, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 14, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 14, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 14, 2021 12:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 14, 2021 12:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-YtvkOIsfQYfFzG89aVBO_8oNHoiij9SynwUJLdMkn5A.jar
    Sep 14, 2021 12:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6578673708245297976.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-m3aFo7d7pNwdiUQ9cAtMczGU5ppzNILc62FVrD8oFDc.jar
    Sep 14, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 14, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 14, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 6f9d067de7ce905ac94723abdfe5f25fc0b59895316e6ef75d7c807e6366899d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-b50GfefOkFrJRyOr3-XyX8C1mJUxbm73XXyAfmNmiZ0.pb
    Sep 14, 2021 12:45:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 14, 2021 12:45:15 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 14, 2021 12:45:15 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 14, 2021 12:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 14, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-13_17_45_15-13702946519292313061?project=apache-beam-testing
    Sep 14, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-13_17_45_15-13702946519292313061
    Sep 14, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-13_17_45_15-13702946519292313061
    Sep 14, 2021 12:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-14T00:45:18.736Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 14, 2021 12:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:45:24.870Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 14, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:45:25.708Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 14, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:45:25.756Z: Expanding GroupByKey operations into optimizable parts.
    Sep 14, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:45:25.912Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 14, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:45:25.986Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 14, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:45:26.008Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 14, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:45:26.039Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 14, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:45:26.390Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 14, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:45:26.477Z: Starting 5 workers in us-central1-a...
    Sep 14, 2021 12:45:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:45:49.951Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 14, 2021 12:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:46:00.128Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 14, 2021 12:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:46:00.170Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 14, 2021 12:46:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:46:10.463Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 14, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:46:35.334Z: Workers have started successfully.
    Sep 14, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:46:35.360Z: Workers have started successfully.
    Sep 14, 2021 12:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:47:01.175Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 14, 2021 12:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:47:01.319Z: Cleaning up.
    Sep 14, 2021 12:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:47:01.386Z: Stopping worker pool...
    Sep 14, 2021 12:49:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:49:32.131Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 14, 2021 12:49:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-14T00:49:32.172Z: Worker pool stopped.
    Sep 14, 2021 12:49:39 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-13_17_45_15-13702946519292313061 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b525c846-870e-4c63-b79c-b353eba80db6 and timestamp: 2021-09-14T00:49:39.386000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.871

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 14, 2021 12:49:40 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 43.504 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 19s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wyj5k7j7bxfdw

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2421

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2421/display/redirect?page=changes>

Changes:

[rohde.samuel] [BEAM-12842] Add timestamp to test work item to deflake

[suztomo] [BEAM-12873] HL7v2IO: to leave schematizedData null, not empty


------------------------------------------
[...truncated 338.70 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 49dffaa09b8e1afd253a5b200b3716de
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 13, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 13, 2021 6:45:00 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 13, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 13, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 13, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 13, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@213658433]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 13, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 13, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 13, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 13, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 13, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2038704487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 13, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 13, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 13, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 13, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 13, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 13, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 13, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-YtvkOIsfQYfFzG89aVBO_8oNHoiij9SynwUJLdMkn5A.jar
    Sep 13, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1142033632284406031.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-5BRHaXDDEyXIDNzk-dK8PY-kfdCHVQ4xFzY32HCbDXc.jar
    Sep 13, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 13, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 13, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 7c0eadb156e7f83781b6dfb43b3fe90bfa691bb00fdcc3ec9545cdb90048c6e3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-fA6tsVbn-DeBtt-0Oz_pC_ppG7AP3MPslUXNuQBIxuM.pb
    Sep 13, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 13, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 13, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 13, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 13, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-13_11_45_14-4796777176590874877?project=apache-beam-testing
    Sep 13, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-13_11_45_14-4796777176590874877
    Sep 13, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-13_11_45_14-4796777176590874877
    Sep 13, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-13T18:45:18.397Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 13, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:45:26.089Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 13, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:45:27.114Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 13, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:45:27.231Z: Expanding GroupByKey operations into optimizable parts.
    Sep 13, 2021 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:45:27.296Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 13, 2021 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:45:27.435Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 13, 2021 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:45:27.471Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 13, 2021 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:45:27.506Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 13, 2021 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:45:27.883Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 13, 2021 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:45:27.944Z: Starting 5 workers in us-central1-c...
    Sep 13, 2021 6:45:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:45:52.887Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 13, 2021 6:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-13T18:46:02.360Z: Autoscaling: Startup of the worker pool in zone us-central1-c reached 2 workers, but the goal was 5 workers. The service will retry. QUOTA_EXCEEDED: Quota 'CPUS' exceeded.  Limit: 2500.0 in region us-central1.
    Sep 13, 2021 6:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:46:02.404Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 13, 2021 6:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:46:02.424Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 13, 2021 6:46:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:46:37.389Z: Workers have started successfully.
    Sep 13, 2021 6:46:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:46:37.423Z: Workers have started successfully.
    Sep 13, 2021 6:47:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:47:25.242Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 13, 2021 6:47:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:47:25.450Z: Cleaning up.
    Sep 13, 2021 6:47:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:47:25.530Z: Stopping worker pool...
    Sep 13, 2021 6:49:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:49:43.008Z: Autoscaling: Resized worker pool from 1 to 0.
    Sep 13, 2021 6:49:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T18:49:43.062Z: Worker pool stopped.
    Sep 13, 2021 6:49:49 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-13_11_45_14-4796777176590874877 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ee48ed5b-7218-4d2d-95c5-1e26a464b6e0 and timestamp: 2021-09-13T18:49:49.658000000Z:
                     Metric:                    Value:
                   read_time                    21.855
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 13, 2021 6:49:50 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
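
Note: this warning means the metrics printed above were measured but never exported; the InfluxDB publisher skips publishing when no target database/measurement is configured. A hedged sketch of what the invocation would add to enable it; the option names below are an assumption inferred from other Beam performance-test jobs, not confirmed by this log:

    # Hypothetical additions to the -DbeamTestPipelineOptions JSON array
    # (option names assumed; verify against the testutils publishing options):
    "--influxHost=http://localhost:8086",
    "--influxDatabase=beam_test_metrics",
    "--influxMeasurement=sql_bqio_read_java_batch"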

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 54.571 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 28s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wpkdsqrte3hby

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2420

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2420/display/redirect>

Changes:


------------------------------------------
[...truncated 335.71 KB...]

> Task :sdks:java:extensions:sql:testJar
Caching disabled for task ':sdks:java:extensions:sql:testJar' because:
  Caching has not been enabled for the task
Task ':sdks:java:extensions:sql:testJar' is not up-to-date because:
  No history is available.
:sdks:java:extensions:sql:testJar (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.128 secs.
:sdks:java:extensions:sql:perf-tests:compileTestJava (Thread[Execution worker for ':' Thread 6,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:compileTestJava FROM-CACHE
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:compileTestJava'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:compileTestJava' is ab74a7458097b86331496e153b6b4789
Task ':sdks:java:extensions:sql:perf-tests:compileTestJava' is not up-to-date because:
  No history is available.
Loaded cache entry for task ':sdks:java:extensions:sql:perf-tests:compileTestJava' with cache key ab74a7458097b86331496e153b6b4789
:sdks:java:extensions:sql:perf-tests:compileTestJava (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.184 secs.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9bf3ac3215333546daf9554f53be71e8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 13, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 13, 2021 12:44:57 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 13, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 13, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 13, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 13, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@213658433]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
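
Note: this is Beam's standard row-coder failure. The PCollection<Row> emitted for the BeamIOSourceRel carries no schema, so neither the CoderRegistry nor the producing PTransform can supply a coder, and pipeline construction aborts before anything is submitted to Dataflow. The second root cause in the message names the fix; a minimal sketch of the pattern, not the IT's actual code (the transform, input, and field set are illustrative, mirroring the query's projected columns):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Declare the shape of the rows the ParDo emits; field names here mirror
    // the query's projected columns and are illustrative only.
    Schema schema =
        Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

    // Attach the schema so a Row coder can be inferred downstream.
    // RowMonitorFn and input are hypothetical stand-ins for the IT's transform.
    PCollection<Row> monitored = input.apply(ParDo.of(new RowMonitorFn()));
    monitored.setRowSchema(schema);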

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 13, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 13, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 13, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 13, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 13, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2038704487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 13, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 13, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 13, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 13, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 13, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 13, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 13, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-YtvkOIsfQYfFzG89aVBO_8oNHoiij9SynwUJLdMkn5A.jar
    Sep 13, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1852039557803765943.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-O9Bv4xdnkJahZJ_O2D-cQHhcGjOe0L9Dco9GvB5DnUQ.jar
    Sep 13, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 13, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 13, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash e4ffd97ad7fe463e3f45c4d01f9a0a23536808c4ca7148719cc2ff580d27a27b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-5P_Zetf-Rj4_RcTQH5oKI1NoCMTKcUhxnML_WA0nons.pb
    Sep 13, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 13, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 13, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 13, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 13, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-13_05_45_12-11719669428517226292?project=apache-beam-testing
    Sep 13, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-13_05_45_12-11719669428517226292
    Sep 13, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-13_05_45_12-11719669428517226292
    Sep 13, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-13T12:45:16.137Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 13, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T12:45:24.799Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 13, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2021-09-13T12:45:25.799Z: Workflow failed. Causes: Project apache-beam-testing has insufficient quota(s) to execute this workflow with 1 instances in region us-central1. Quota summary (required/available): 1/24385 instances, 2/0 CPUs, 25/183671 disk GB, 0/2397 SSD disk GB, 1/288 instance groups, 1/291 managed instance groups, 1/517 instance templates, 1/614 in-use IP addresses.

    Please see https://cloud.google.com/compute/docs/resource-quotas about requesting more quota.
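
Note: the "2/0 CPUs" entry in the quota summary above is the blocker: the workflow needed 2 vCPUs and the region had none left under the project's limit, the same CPUS quota behind the QUOTA_EXCEEDED warning in the run logged above. Current usage against each regional quota can be inspected before retrying; a small example (the --format projection is assumed supported by the installed gcloud):

    gcloud compute regions describe us-central1 \
        --project=apache-beam-testing \
        --format="yaml(quotas)"

Each entry lists metric, limit, and usage; the CPUS row shows how close the project is to its limit.
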
    Sep 13, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T12:45:25.836Z: Cleaning up.
    Sep 13, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T12:45:25.886Z: Worker pool stopped.
    Sep 13, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T12:45:27.060Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 13, 2021 12:45:32 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-13_05_45_12-11719669428517226292 failed with status FAILED.
    Sep 13, 2021 12:45:32 PM org.apache.beam.sdk.testutils.metrics.MetricsReader getCounterMetric
    SEVERE: Failed to get metric fields_read, from namespace org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
    Sep 13, 2021 12:45:33 PM org.apache.beam.sdk.testutils.metrics.MetricsReader getCounterMetric
    SEVERE: Failed to get metric fields_read, from namespace org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): db4cc790-e976-4cbd-af70-e06485dffd9c and timestamp: 2021-09-13T12:45:32.695000000Z:
                     Metric:                    Value:
                   read_time                       0.0
                 fields_read                      -1.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 13, 2021 12:45:33 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 40.449 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 1m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/xhnwe4kstskhe

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2419

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2419/display/redirect>

Changes:


------------------------------------------
[...truncated 336.12 KB...]
Skipping task ':sdks:java:extensions:sql:testClasses' as it has no actions.
:sdks:java:extensions:sql:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:testJar (Thread[Execution worker for ':' Thread 10,5,main]) started.

> Task :sdks:java:extensions:sql:testJar
Caching disabled for task ':sdks:java:extensions:sql:testJar' because:
  Caching has not been enabled for the task
Task ':sdks:java:extensions:sql:testJar' is not up-to-date because:
  No history is available.
:sdks:java:extensions:sql:testJar (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.131 secs.
:sdks:java:extensions:sql:perf-tests:compileTestJava (Thread[Execution worker for ':' Thread 10,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:compileTestJava FROM-CACHE
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:compileTestJava'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:compileTestJava' is ab74a7458097b86331496e153b6b4789
Task ':sdks:java:extensions:sql:perf-tests:compileTestJava' is not up-to-date because:
  No history is available.
Loaded cache entry for task ':sdks:java:extensions:sql:perf-tests:compileTestJava' with cache key ab74a7458097b86331496e153b6b4789
:sdks:java:extensions:sql:perf-tests:compileTestJava (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.173 secs.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9bf3ac3215333546daf9554f53be71e8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 13, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 13, 2021 6:44:58 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 13, 2021 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 13, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 13, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 13, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1386607672]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 13, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 13, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 13, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 13, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 13, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@725804086]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 13, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 13, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 13, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 13, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 13, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 13, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 13, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-YtvkOIsfQYfFzG89aVBO_8oNHoiij9SynwUJLdMkn5A.jar
    Sep 13, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7879367242566746536.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-qKmj_qzCpRvs-LS2WaXHpClOoMP9oKEzApWFwp1ZpGE.jar
    Sep 13, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 13, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 13, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 6d24cf6da9eec3dc96f960dc8d22c9f662e42cdef3573233e24b59f415d6a97a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-bSTPbanuw9yW-WDcjSLJ9mLkLN7zVzIz4ktZ9BXWqXo.pb
    Sep 13, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 13, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 13, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 13, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 13, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-12_23_45_12-9724516326294111648?project=apache-beam-testing
    Sep 13, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-12_23_45_12-9724516326294111648
    Sep 13, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-12_23_45_12-9724516326294111648
    Sep 13, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-13T06:45:16.424Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 13, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T06:45:23.050Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 13, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2021-09-13T06:45:23.740Z: Workflow failed. Causes: Project apache-beam-testing has insufficient quota(s) to execute this workflow with 1 instances in region us-central1. Quota summary (required/available): 1/24386 instances, 2/0 CPUs, 25/183711 disk GB, 0/2397 SSD disk GB, 1/283 instance groups, 1/286 managed instance groups, 1/512 instance templates, 1/615 in-use IP addresses.

    Please see https://cloud.google.com/compute/docs/resource-quotas about requesting more quota.
    Sep 13, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T06:45:23.801Z: Cleaning up.
    Sep 13, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T06:45:23.887Z: Worker pool stopped.
    Sep 13, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T06:45:25.091Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 13, 2021 6:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-12_23_45_12-9724516326294111648 failed with status FAILED.
    Sep 13, 2021 6:45:28 AM org.apache.beam.sdk.testutils.metrics.MetricsReader getCounterMetric
    SEVERE: Failed to get metric fields_read, from namespace org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
    Sep 13, 2021 6:45:28 AM org.apache.beam.sdk.testutils.metrics.MetricsReader getCounterMetric
    SEVERE: Failed to get metric fields_read, from namespace org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 78fec180-e8b8-48b3-ba19-aaa8ed48fed6 and timestamp: 2021-09-13T06:45:28.353000000Z:
                     Metric:                    Value:
                   read_time                       0.0
                 fields_read                      -1.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 13, 2021 6:45:28 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.059 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 35.42 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 1m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/khs3zi7l7skp4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2418

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2418/display/redirect>

Changes:


------------------------------------------
[...truncated 338.95 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 13, 2021 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 13, 2021 12:44:58 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 13, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 13, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 13, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 13, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1386607672]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
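
The exception's own root-cause list is the fix: give the Row PCollection a schema (or an explicit RowCoder) before graph construction finishes. Below is a minimal, self-contained sketch of that pattern; the class name and row values are illustrative, and the schema simply mirrors the four projected columns of the test query (this is not the IT's actual code):

    import java.util.Arrays;

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaExample {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Schema mirroring the four projected columns of the test query.
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt32Field("score")
                .build();

        Row row = Row.withSchema(schema).addValues("alice", "story", "hello", 3).build();

        // Attaching a RowCoder is what the IllegalStateException above asks for;
        // without it, getCoder() fails while the pipeline graph is finalized.
        PCollection<Row> rows =
            pipeline.apply(Create.of(Arrays.asList(row)).withCoder(RowCoder.of(schema)));
        // Equivalent alternative suggested by the error message itself:
        // rows.setRowSchema(schema);

        pipeline.run().waitUntilFinish();
      }
    }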

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 13, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 13, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 13, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 13, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 13, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@725804086]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 13, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 13, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 13, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 13, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 13, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])
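
The plan above shows both push-downs working: usedFields lists the only columns that will be read, and the supported filter is handed to the storage layer (logged just below). For comparison, here is a hedged sketch of the roughly equivalent hand-written DIRECT_READ with BigQueryIO; the table reference and class name are placeholders, not what this IT uses:

    import java.util.Arrays;

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        pipeline.apply(
            BigQueryIO.readTableRows()
                // Placeholder table reference; substitute a real one.
                .from("my-project:my_dataset.HACKER_NEWS")
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Project push-down: read only the used fields.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Filter push-down: the supported predicate is evaluated by the
                // BigQuery Storage API instead of inside the pipeline.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }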

    Sep 13, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 13, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 13, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 13, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-YtvkOIsfQYfFzG89aVBO_8oNHoiij9SynwUJLdMkn5A.jar
    Sep 13, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test919607493275932945.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-OTE8YSDUWBG4Sgv33rUQ44s37P6NZomVlUNtIT3-SRM.jar
    Sep 13, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 13, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 13, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 80baffd28ad82b6c94e9af3624894809a486f5e309c4d5039f92f41949d12497> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-gLr_0orYK2yU6a82JIlICaSG9eMJxNUDn5L0GUnRJJc.pb
    Sep 13, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 13, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 13, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 13, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 13, 2021 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-12_17_45_13-14486603282926067895?project=apache-beam-testing
    Sep 13, 2021 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-12_17_45_13-14486603282926067895
    Sep 13, 2021 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-12_17_45_13-14486603282926067895
    Sep 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-13T00:45:17.004Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 13, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:45:22.947Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 13, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:45:23.616Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 13, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:45:23.655Z: Expanding GroupByKey operations into optimizable parts.
    Sep 13, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:45:23.682Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 13, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:45:23.755Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 13, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:45:23.789Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 13, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:45:23.821Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 13, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:45:24.163Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 13, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:45:24.239Z: Starting 5 workers in us-central1-c...
    Sep 13, 2021 12:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:45:30.613Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 13, 2021 12:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-13T00:46:06.832Z: Autoscaling: Startup of the worker pool in zone us-central1-c reached 1 workers, but the goal was 5 workers. The service will retry. QUOTA_EXCEEDED: Quota 'CPUS' exceeded.  Limit: 2500.0 in region us-central1.
    Sep 13, 2021 12:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:46:19.227Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 13, 2021 12:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:46:19.289Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 13, 2021 12:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:46:39.798Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 13, 2021 12:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:46:39.833Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 13, 2021 12:46:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:46:40.974Z: Workers have started successfully.
    Sep 13, 2021 12:46:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:46:41.004Z: Workers have started successfully.
    Sep 13, 2021 12:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-13T00:47:04.578Z: Autoscaling: Unable to reach resize target in zone us-central1-c. QUOTA_EXCEEDED: Quota 'CPUS' exceeded.  Limit: 2500.0 in region us-central1.
    Sep 13, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:47:28.068Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 13, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:47:28.193Z: Cleaning up.
    Sep 13, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:47:28.276Z: Stopping worker pool...
    Sep 13, 2021 12:49:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:49:52.830Z: Autoscaling: Resized worker pool from 2 to 0.
    Sep 13, 2021 12:49:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-13T00:49:52.880Z: Worker pool stopped.
    Sep 13, 2021 12:49:58 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-12_17_45_13-14486603282926067895 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9a498a4f-8c9e-40cf-b676-44d9d4ea99fb and timestamp: 2021-09-13T00:49:58.187000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    23.894

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 13, 2021 12:49:58 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
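
The warning means no InfluxDB measurement/database was configured for this run, so the metrics above were only printed. As a rough sketch of the settings object the publisher consumes, assuming the testutils InfluxDBSettings builder (the builder method names are taken from memory of that API and should be verified against the testutils source; all values are placeholders):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // All three values are placeholders; the perf-test jobs normally
        // supply them through pipeline options rather than hard-coding.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")
                .withDatabase("beam_test_metrics")
                .withMeasurement("sql_bqio_read_java_batch")
                .get();
        System.out.println("Publishing to " + settings);
      }
    }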

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 5 mins 4.566 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 38s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/pmpyp6li6rni4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2417

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2417/display/redirect>

Changes:


------------------------------------------
[...truncated 333.78 KB...]
Skipping task ':sdks:java:extensions:sql:testClasses' as it has no actions.
:sdks:java:extensions:sql:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:testJar (Thread[Daemon worker,5,main]) started.

> Task :sdks:java:extensions:sql:testJar
Caching disabled for task ':sdks:java:extensions:sql:testJar' because:
  Caching has not been enabled for the task
Task ':sdks:java:extensions:sql:testJar' is not up-to-date because:
  No history is available.
:sdks:java:extensions:sql:testJar (Thread[Daemon worker,5,main]) completed. Took 0.218 secs.
:sdks:java:extensions:sql:perf-tests:compileTestJava (Thread[Daemon worker,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:compileTestJava FROM-CACHE
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:compileTestJava'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:compileTestJava' is ab74a7458097b86331496e153b6b4789
Task ':sdks:java:extensions:sql:perf-tests:compileTestJava' is not up-to-date because:
  No history is available.
Loaded cache entry for task ':sdks:java:extensions:sql:perf-tests:compileTestJava' with cache key ab74a7458097b86331496e153b6b4789
:sdks:java:extensions:sql:perf-tests:compileTestJava (Thread[Daemon worker,5,main]) completed. Took 0.157 secs.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9bf3ac3215333546daf9554f53be71e8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 12, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 12, 2021 6:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 12, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 12, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 12, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 12, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1386607672]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@725804086]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 12, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 12, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 12, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 12, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 12, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-YtvkOIsfQYfFzG89aVBO_8oNHoiij9SynwUJLdMkn5A.jar
    Sep 12, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4658627405174761352.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-XtODq3XsBYoVJ0x-Y_7UcmvyUjcNsw6Lqn66_Rv14z4.jar
    Sep 12, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 12, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 12, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash b50d3976b1f658766884321503045358fa3f293d6b7bfeacc6a1596a98a683cf> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-tQ05drH2WHZohDIVAwRTWPo_KT1re_6sxqFZapimg88.pb
    Sep 12, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 12, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 12, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 12, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 12, 2021 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-12_11_45_09-1474197715117398901?project=apache-beam-testing
    Sep 12, 2021 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-12_11_45_09-1474197715117398901
    Sep 12, 2021 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-12_11_45_09-1474197715117398901
    Sep 12, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-12T18:45:13.629Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 12, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T18:45:21.784Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 12, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2021-09-12T18:45:22.383Z: Workflow failed. Causes: Project apache-beam-testing has insufficient quota(s) to execute this workflow with 1 instances in region us-central1. Quota summary (required/available): 1/24385 instances, 2/0 CPUs, 25/183691 disk GB, 0/2397 SSD disk GB, 1/281 instance groups, 1/284 managed instance groups, 1/510 instance templates, 1/614 in-use IP addresses.

    Please see https://cloud.google.com/compute/docs/resource-quotas about requesting more quota.
    Sep 12, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T18:45:22.416Z: Cleaning up.
    Sep 12, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T18:45:22.468Z: Worker pool stopped.
    Sep 12, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T18:45:23.839Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 12, 2021 6:45:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-12_11_45_09-1474197715117398901 failed with status FAILED.
    Sep 12, 2021 6:45:27 PM org.apache.beam.sdk.testutils.metrics.MetricsReader getCounterMetric
    SEVERE: Failed to get metric fields_read, from namespace org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
    Sep 12, 2021 6:45:29 PM org.apache.beam.sdk.testutils.metrics.MetricsReader getCounterMetric
    SEVERE: Failed to get metric fields_read, from namespace org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d1c064a5-b49d-4404-aedd-c622c2ca9c36 and timestamp: 2021-09-12T18:45:27.676000000Z:
                     Metric:                    Value:
                 fields_read                      -1.0
                   read_time                       0.0
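
The -1.0 is consistent with the two "Failed to get metric fields_read" errors above: the job failed before the counter was ever committed, and the reader falls back to -1. Here is a hedged sketch of that read path, assuming a (result, namespace) constructor for the testutils MetricsReader; only getCounterMetric is confirmed by this log, and the namespace is a placeholder:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.PipelineResult;
    import org.apache.beam.sdk.metrics.Metrics;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.testutils.metrics.MetricsReader;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.TypeDescriptors;

    public class FieldsReadSketch {
      private static final String NAMESPACE = "example.Namespace"; // placeholder

      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        pipeline
            .apply(Create.of(1, 2, 3))
            .apply(
                MapElements.into(TypeDescriptors.integers())
                    .via(
                        x -> {
                          // Increment the counter the reader will query later.
                          Metrics.counter(NAMESPACE, "fields_read").inc();
                          return x;
                        }));

        PipelineResult result = pipeline.run();
        result.waitUntilFinish();

        // Assumed constructor shape: (result, namespace). getCounterMetric
        // returns -1 when the metric cannot be found, matching the log above.
        MetricsReader reader = new MetricsReader(result, NAMESPACE);
        System.out.println("fields_read = " + reader.getCounterMetric("fields_read"));
      }
    }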

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 12, 2021 6:45:29 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 37.754 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 1m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ngj5tk2een2ra

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2416

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2416/display/redirect>

Changes:


------------------------------------------
[...truncated 339.28 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 12, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 12, 2021 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 12, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 12, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 12, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 12, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1386607672]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 12, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 12, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 12, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 12, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 12, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@725804086]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 12, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 12, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 12, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 12, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 12, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 12, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 12, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-YtvkOIsfQYfFzG89aVBO_8oNHoiij9SynwUJLdMkn5A.jar
    Sep 12, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7486825016035298335.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-SAgVxt_P5-e8wDcOUrg7FNrbWsIqJnZRAvDbV8EKUXk.jar
    Sep 12, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.14.1/85348a0c44c298bbec5ae747e67ae12e60b3aef6/libthrift-0.14.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/libthrift-0.14.1-WzUQ_nLm8HJeKc7269seq6zMxp15_E7VC2gWAKh2Z-w.jar
    Sep 12, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/8.5.46/5d686394334d143f48251827435ab086a161e75e/tomcat-embed-core-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-embed-core-8.5.46-vl-FREjS7l1uADb-srT3ExYweaG2uaepdQjlWRetNcI.jar
    Sep 12, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat/tomcat-annotations-api/8.5.46/56c67699de192c603afd6f029e80e5ff8d98e7e9/tomcat-annotations-api-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-annotations-api-8.5.46-amtG0OaVhkRRTAyjZYs7B-YSOmgqIO4203lSQnNfq8M.jar
    Sep 12, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 246 files cached, 4 files newly uploaded in 1 seconds
    Sep 12, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 12, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 9a503c998f9d4b1acab9c171e4aaaff17ea57c901ea7894659dfc58e5c37ed19> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-mlA8mY-dSxrKucFx5Kqv8X6lfJAep4lGWd_Fjlw37Rk.pb
    Sep 12, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 12, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 12, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 12, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 12, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-12_05_45_10-3693096024963707878?project=apache-beam-testing
    Sep 12, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-12_05_45_10-3693096024963707878
    Sep 12, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-12_05_45_10-3693096024963707878
    Sep 12, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-12T12:45:14.457Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 12, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:45:20.860Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 12, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:45:21.657Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 12, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:45:21.708Z: Expanding GroupByKey operations into optimizable parts.
    Sep 12, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:45:21.737Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 12, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:45:21.799Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 12, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:45:21.849Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 12, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:45:21.893Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 12, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:45:22.216Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 12, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:45:22.288Z: Starting 5 workers in us-central1-a...
    Sep 12, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:45:28.013Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 12, 2021 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-12T12:45:54.991Z: Autoscaling: Startup of the worker pool in zone us-central1-a reached 3 workers, but the goal was 5 workers. The service will retry. QUOTA_EXCEEDED: Quota 'CPUS' exceeded.  Limit: 2500.0 in region us-central1.
    Sep 12, 2021 12:46:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:46:09.136Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Sep 12, 2021 12:46:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:46:09.162Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Sep 12, 2021 12:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:46:28.237Z: Workers have started successfully.
    Sep 12, 2021 12:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:46:28.268Z: Workers have started successfully.
    Sep 12, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:46:57.197Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 12, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:46:57.334Z: Cleaning up.
    Sep 12, 2021 12:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:46:57.421Z: Stopping worker pool...
    Sep 12, 2021 12:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:49:31.871Z: Autoscaling: Resized worker pool from 3 to 0.
    Sep 12, 2021 12:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T12:49:31.913Z: Worker pool stopped.
    Sep 12, 2021 12:49:39 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-12_05_45_10-3693096024963707878 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a8ef74cc-7836-4705-8fd5-d5941c1d45e7 and timestamp: 2021-09-12T12:49:39.153000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.982

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 12, 2021 12:49:39 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

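For what it's worth, the "Missing property -- measurement/database" warning above only means the InfluxDB publisher was never pointed at a database, so the collected metrics are dropped rather than published. In Beam's load-test utilities the target is normally supplied through pipeline options roughly like the line below (flag names follow the load-test conventions; the host and database values here are purely illustrative):

    --influxHost=http://localhost:8086 --influxDatabase=beam_test_metrics --influxMeasurement=sql_bqio_read_java_batch
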
Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 47.452 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 19s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/3gtlzupqhijsg

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2415

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2415/display/redirect>

Changes:


------------------------------------------
[...truncated 335.06 KB...]
Skipping task ':sdks:java:extensions:sql:testClasses' as it has no actions.
:sdks:java:extensions:sql:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:testJar (Thread[Execution worker for ':' Thread 4,5,main]) started.

> Task :sdks:java:extensions:sql:testJar
Caching disabled for task ':sdks:java:extensions:sql:testJar' because:
  Caching has not been enabled for the task
Task ':sdks:java:extensions:sql:testJar' is not up-to-date because:
  No history is available.
:sdks:java:extensions:sql:testJar (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.127 secs.
:sdks:java:extensions:sql:perf-tests:compileTestJava (Thread[Execution worker for ':' Thread 4,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:compileTestJava FROM-CACHE
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:compileTestJava'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:compileTestJava' is ab74a7458097b86331496e153b6b4789
Task ':sdks:java:extensions:sql:perf-tests:compileTestJava' is not up-to-date because:
  No history is available.
Loaded cache entry for task ':sdks:java:extensions:sql:perf-tests:compileTestJava' with cache key ab74a7458097b86331496e153b6b4789
:sdks:java:extensions:sql:perf-tests:compileTestJava (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.15 secs.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9bf3ac3215333546daf9554f53be71e8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 12, 2021 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 12, 2021 6:44:55 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 12, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 12, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 12, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 12, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1386607672]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

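The IllegalStateException above names its own remedy: the Row-typed output of ParDo(RowMonitor) carries no schema, so no Coder can be inferred for it. A minimal, self-contained sketch of the fix the message suggests -- attaching a schema with PCollection.setRowSchema -- is below. It is illustrative only: the field names mirror the query's projection, and the Create-based input stands in for the test's actual source.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    Pipeline pipeline = Pipeline.create();

    // Schema matching the projected columns: author, type, title, score.
    Schema schema =
        Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

    // Create cannot infer a coder for Row on its own; setRowSchema supplies
    // the SchemaCoder and avoids the "Unable to return a default Coder" error.
    PCollection<Row> rows =
        pipeline
            .apply(Create.of(
                Row.withSchema(schema)
                    .addValues("someuser", "story", "Example title", 3L)
                    .build()))
            .setRowSchema(schema);
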
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 12, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 12, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 12, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 12, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 12, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@725804086]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 12, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 12, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 12, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 12, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (`type` = 'story' OR `type` = 'job') AND `score` > 2
    Sep 12, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 12, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 12, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-YtvkOIsfQYfFzG89aVBO_8oNHoiij9SynwUJLdMkn5A.jar
    Sep 12, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1793329628853421440.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-fX0S2Ewuz9TQZPBXli5pmdUsdeKyMkXMTarrRTsp8a0.jar
    Sep 12, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 12, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 12, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash b84fa1fe57b1ea614ab9cc6a33a8143241ccbc557c84b2807777901be7a4e118> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-uE-h_lex6mFKucxqM6gUMkHMvFV8hLKAd3eQG-ek4Rg.pb
    Sep 12, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 12, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 12, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 12, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 12, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-11_23_45_08-8735805944682801155?project=apache-beam-testing
    Sep 12, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-11_23_45_08-8735805944682801155
    Sep 12, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-11_23_45_08-8735805944682801155
    Sep 12, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-12T06:45:12.499Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 12, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T06:45:19.075Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 12, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2021-09-12T06:45:19.752Z: Workflow failed. Causes: Project apache-beam-testing has insufficient quota(s) to execute this workflow with 1 instances in region us-central1. Quota summary (required/available): 1/24365 instances, 2/0 CPUs, 25/184816 disk GB, 0/2397 SSD disk GB, 1/274 instance groups, 1/277 managed instance groups, 1/503 instance templates, 1/594 in-use IP addresses.

    Please see https://cloud.google.com/compute/docs/resource-quotas about requesting more quota.
    Sep 12, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T06:45:19.787Z: Cleaning up.
    Sep 12, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T06:45:19.835Z: Worker pool stopped.
    Sep 12, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T06:45:20.999Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 12, 2021 6:45:25 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-11_23_45_08-8735805944682801155 failed with status FAILED.
    Sep 12, 2021 6:45:25 AM org.apache.beam.sdk.testutils.metrics.MetricsReader getCounterMetric
    SEVERE: Failed to get metric fields_read, from namespace org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
    Sep 12, 2021 6:45:25 AM org.apache.beam.sdk.testutils.metrics.MetricsReader getCounterMetric
    SEVERE: Failed to get metric fields_read, from namespace org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT

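The SEVERE "insufficient quota(s)" message above is a project/region capacity problem rather than a pipeline bug: the workflow could not obtain even one instance because the region's CPUs quota was exhausted (2 required, 0 available). One quick way to inspect the regional limits named in the message is to describe the region with gcloud (illustrative; run from any shell with the Cloud SDK installed):

    gcloud compute regions describe us-central1 --project=apache-beam-testing

Lowering --numWorkers in the test's pipeline options is the other obvious lever while the quota is constrained.
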
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): bdcfb102-e766-4104-82b0-a27b774a0268 and timestamp: 2021-09-12T06:45:25.230000000Z:
                     Metric:                    Value:
                   read_time                       0.0
                 fields_read                      -1.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 12, 2021 6:45:25 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 35.07 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 1m 8s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/s7puq6ro3in2a

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2414

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2414/display/redirect>

Changes:


------------------------------------------
[...truncated 337.78 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9bf3ac3215333546daf9554f53be71e8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 12, 2021 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 12, 2021 12:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 12, 2021 12:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 12, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 12, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 12, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@213658433]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 12, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 12, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2038704487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (`type` = 'story' OR `type` = 'job') AND `score` > 2
    Sep 12, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 12, 2021 12:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 12, 2021 12:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-YtvkOIsfQYfFzG89aVBO_8oNHoiij9SynwUJLdMkn5A.jar
    Sep 12, 2021 12:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test826078511669609757.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-TkaelBOxE8Li7v4Bzbt5z3ny7wOv9q-y1TxGbvxZyTw.jar
    Sep 12, 2021 12:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 12, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 12, 2021 12:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 29035ee44a670599ab55db3876558b11d682351762a83b4bce6728ddec277cde> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-KQNe5EpnBZmrVds4dlWLEdaCNRdiqDtLzmco3ewnfN4.pb
    Sep 12, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 12, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 12, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 12, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 12, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-11_17_45_09-11099528075843360802?project=apache-beam-testing
    Sep 12, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-11_17_45_09-11099528075843360802
    Sep 12, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-11_17_45_09-11099528075843360802
    Sep 12, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-12T00:45:12.848Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 12, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:45:19.634Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 12, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:45:20.362Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 12, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:45:20.400Z: Expanding GroupByKey operations into optimizable parts.
    Sep 12, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:45:20.426Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 12, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:45:20.481Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 12, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:45:20.514Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 12, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:45:20.543Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 12, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:45:20.849Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 12, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:45:20.942Z: Starting 5 workers in us-central1-c...
    Sep 12, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:45:41.563Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 12, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:45:50.887Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 12, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:45:50.920Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 12, 2021 12:46:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:46:01.171Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 12, 2021 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:46:26.260Z: Workers have started successfully.
    Sep 12, 2021 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:46:26.288Z: Workers have started successfully.
    Sep 12, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:46:54.886Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 12, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:46:55.018Z: Cleaning up.
    Sep 12, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:46:55.122Z: Stopping worker pool...
    Sep 12, 2021 12:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:49:20.764Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 12, 2021 12:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-12T00:49:20.822Z: Worker pool stopped.
    Sep 12, 2021 12:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-11_17_45_09-11099528075843360802 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 06ecd640-683c-4082-be1f-d93fa9fa0df7 and timestamp: 2021-09-12T00:49:26.199000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.133

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 12, 2021 12:49:27 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 35.531 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ndq7hxkuciriq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2413

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2413/display/redirect>

Changes:


------------------------------------------
[...truncated 337.29 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9bf3ac3215333546daf9554f53be71e8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 11, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 11, 2021 6:45:02 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 11, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 11, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 11, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 11, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@213658433]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
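
The IllegalStateException above is the check both failing tests trip over: BeamSqlRelUtils.toPCollection yields a PCollection of Beam Rows, and a coder for Row can only be inferred once a Schema is attached. Below is a minimal, self-contained sketch of the remedy the message itself suggests (PCollection.setRowSchema); the pipeline, schema, and data are illustrative stand-ins, not the test's code, with the schema mirroring the four projected columns of the SQL in this log.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFix {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Schema mirroring the projected columns of the SQL above.
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        PCollection<Row> rows = p
            .apply(Create.of(
                    Row.withSchema(schema).addValues("a", "story", "t", 3L).build())
                .withCoder(RowCoder.of(schema)))
            // This is the call the exception asks for: attaching the schema
            // lets downstream coder inference succeed for Row elements.
            .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }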

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 11, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 11, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 11, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 11, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 11, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2038704487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 11, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 11, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 11, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 11, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
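
For readers following the push-down: the plan above requests only the used fields (by, type, title, score) and ships the supported predicate to the BigQuery Storage Read API. At the BigQueryIO level that corresponds, roughly, to a direct read with selected fields and a row restriction. The sketch below illustrates that correspondence under stated assumptions (the table reference is a placeholder); it is not the code this test runs.

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownEquivalent {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(BigQueryIO.readTableRows()
            .from("project:dataset.HACKER_NEWS")  // placeholder table reference
            .withMethod(Method.DIRECT_READ)
            // Projection push-down: only the used fields are read.
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // Predicate push-down: the supported filter runs server-side.
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }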
    Sep 11, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 11, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 11, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-YtvkOIsfQYfFzG89aVBO_8oNHoiij9SynwUJLdMkn5A.jar
    Sep 11, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3499213969173461346.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-D9zJaYxvS8tJzo-57Dj1FSezsatwIbrwZVy9-19IHcg.jar
    Sep 11, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 11, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 11, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash 66cbbca901a3a4057e1535b113fa75c39e318520d2ddf2eeabffc9b0b05c10cf> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Zsu8qQGjpAV-FTWxE_p1w54xhSDS3fLuq__JsLBcEM8.pb
    Sep 11, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 11, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 11, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 11, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-11_11_45_16-18074417737114682408?project=apache-beam-testing
    Sep 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-11_11_45_16-18074417737114682408
    Sep 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-11_11_45_16-18074417737114682408
    Sep 11, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-11T18:45:19.965Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 11, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:45:25.892Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 11, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:45:26.748Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 11, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:45:26.786Z: Expanding GroupByKey operations into optimizable parts.
    Sep 11, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:45:26.815Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 11, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:45:26.880Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 11, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:45:26.908Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 11, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:45:26.944Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 11, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:45:27.292Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 11, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:45:27.371Z: Starting 5 workers in us-central1-a...
    Sep 11, 2021 6:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:45:33.088Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 11, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:46:11.316Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 11, 2021 6:46:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:46:37.740Z: Workers have started successfully.
    Sep 11, 2021 6:46:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:46:37.771Z: Workers have started successfully.
    Sep 11, 2021 6:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:47:07.105Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 11, 2021 6:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:47:07.256Z: Cleaning up.
    Sep 11, 2021 6:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:47:07.333Z: Stopping worker pool...
    Sep 11, 2021 6:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:49:30.966Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 11, 2021 6:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T18:49:31.005Z: Worker pool stopped.
    Sep 11, 2021 6:49:36 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-11_11_45_16-18074417737114682408 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 45ef05c3-6710-44c2-9bcd-20e799557726 and timestamp: 2021-09-11T18:49:36.222000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.422

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 11, 2021 6:49:36 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 39.649 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/crqdafydtueqi

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2412

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2412/display/redirect>

Changes:


------------------------------------------
[...truncated 337.65 KB...]
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9bf3ac3215333546daf9554f53be71e8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 11, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 11, 2021 12:44:58 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 11, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 11, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 11, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 11, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1386607672]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 11, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 11, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 11, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 11, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 11, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@725804086]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 11, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 11, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 11, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 11, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 11, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 11, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 11, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-YtvkOIsfQYfFzG89aVBO_8oNHoiij9SynwUJLdMkn5A.jar
    Sep 11, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5935111599173117703.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-PQfxmt2I_IHFUE2JE4FhArba1SRch9g5yH-KJxFzNJM.jar
    Sep 11, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 11, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 11, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash dac468a22a4b28596f2b93ef724021c417aa3f2995da5dd17d20d9f2455c1889> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-2sRooipLKFlvK5PvckAhxBeqPymV2l3RfSDZ8kVcGIk.pb
    Sep 11, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 11, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 11, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 11, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 11, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-11_05_45_13-15054943979245899383?project=apache-beam-testing
    Sep 11, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-11_05_45_13-15054943979245899383
    Sep 11, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-11_05_45_13-15054943979245899383
    Sep 11, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-11T12:45:18.653Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 11, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T12:45:25.036Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 11, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T12:45:25.791Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 11, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T12:45:25.831Z: Expanding GroupByKey operations into optimizable parts.
    Sep 11, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T12:45:25.856Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 11, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T12:45:25.929Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 11, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T12:45:25.955Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 11, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T12:45:25.996Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 11, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T12:45:26.293Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 11, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T12:45:26.363Z: Starting 5 workers in us-central1-a...
    Sep 11, 2021 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T12:45:56.341Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 11, 2021 12:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-11T12:45:58.238Z: Autoscaling: Startup of the worker pool in zone us-central1-a reached 1 workers, but the goal was 5 workers. The service will retry. QUOTA_EXCEEDED: Quota 'CPUS' exceeded.  Limit: 2500.0 in region us-central1.
    Sep 11, 2021 12:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2021-09-11T12:46:10.736Z: Startup of the worker pool in zone us-central1-a failed to bring up any of the desired 5 workers. QUOTA_EXCEEDED: Quota 'CPUS' exceeded.  Limit: 2500.0 in region us-central1.
    Sep 11, 2021 12:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2021-09-11T12:46:10.762Z: Workflow failed.
    Sep 11, 2021 12:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T12:46:10.827Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 11, 2021 12:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T12:46:10.898Z: Cleaning up.
    Sep 11, 2021 12:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T12:46:10.957Z: Stopping worker pool...
    Sep 11, 2021 12:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T12:46:27.073Z: Worker pool stopped.
    Sep 11, 2021 12:46:33 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-11_05_45_13-15054943979245899383 failed with status FAILED.
    Sep 11, 2021 12:46:33 PM org.apache.beam.sdk.testutils.metrics.MetricsReader getCounterMetric
    SEVERE: Failed to get metric fields_read, from namespace org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
    Sep 11, 2021 12:46:34 PM org.apache.beam.sdk.testutils.metrics.MetricsReader getCounterMetric
    SEVERE: Failed to get metric fields_read, from namespace org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9fd9c040-7698-4ee3-86bd-124c76c3ebce and timestamp: 2021-09-11T12:46:33.743000000Z:
                     Metric:                    Value:
                   read_time                       0.0
                 fields_read                      -1.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 11, 2021 12:46:34 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 1 mins 41.435 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 2m 15s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wchqsqip67p36

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2411

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2411/display/redirect>

Changes:


------------------------------------------
[...truncated 336.72 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9bf3ac3215333546daf9554f53be71e8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 11, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 11, 2021 6:44:57 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 11, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 11, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 11, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1386607672]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@725804086]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 11, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
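
Editor's note: the BEAMPlan and the buildIOReader line above show both
push-downs at work -- only the four used fields are read, and the "supported"
filter travels to the BigQuery Storage API as a row restriction. A rough
hand-written equivalent at the BigQueryIO level follows; the table id and the
surrounding pipeline are assumptions for illustration, not taken from this
test:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full")  // assumed table id
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Projection push-down: the usedFields from the plan.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate push-down: the filter logged just above.
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }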
    Sep 11, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 11, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 11, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-YtvkOIsfQYfFzG89aVBO_8oNHoiij9SynwUJLdMkn5A.jar
    Sep 11, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test849136224923080782.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-KOXGO-JrQwQGgsriOgYXMQUQ9hZx3iQnv-N2ITUot1w.jar
    Sep 11, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 11, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 11, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 7edfb895ff05a07d75c6b91515f90393a6c53b1105d071a73e0b344dca5bca9d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ft-4lf8FoH11xrkVFfkDk6bFOxEF0HGnPgs0Tcpbyp0.pb
    Sep 11, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 11, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 11, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 11, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 11, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-10_23_45_10-11964116519564220217?project=apache-beam-testing
    Sep 11, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-10_23_45_10-11964116519564220217
    Sep 11, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-10_23_45_10-11964116519564220217
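
Editor's note: a companion command for polling the job while it runs --
standard gcloud, not part of this log -- is:
    > gcloud dataflow jobs describe --project=apache-beam-testing --region=us-central1 2021-09-10_23_45_10-11964116519564220217
which prints the job's metadata, including its current state.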
    Sep 11, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-11T06:45:14.184Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:45:20.369Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 11, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:45:21.102Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 11, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:45:21.134Z: Expanding GroupByKey operations into optimizable parts.
    Sep 11, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:45:21.171Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 11, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:45:21.245Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 11, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:45:21.273Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 11, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:45:21.308Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 11, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:45:21.675Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 11, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:45:21.758Z: Starting 5 workers in us-central1-a...
    Sep 11, 2021 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:45:52.584Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 11, 2021 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:45:58.250Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 11, 2021 6:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:46:23.794Z: Workers have started successfully.
    Sep 11, 2021 6:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:46:23.829Z: Workers have started successfully.
    Sep 11, 2021 6:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:46:51.657Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 11, 2021 6:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:46:51.807Z: Cleaning up.
    Sep 11, 2021 6:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:46:51.895Z: Stopping worker pool...
    Sep 11, 2021 6:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:49:14.040Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 11, 2021 6:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T06:49:14.082Z: Worker pool stopped.
    Sep 11, 2021 6:49:20 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-10_23_45_10-11964116519564220217 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c9e8fc3e-69d6-4945-890f-662096f22848 and timestamp: 2021-09-11T06:49:20.691000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      6.96

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 11, 2021 6:49:21 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
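
Editor's note: this warning means the run was not configured with an InfluxDB
measurement/database, so the collected metrics were printed but not published.
A hedged sketch of supplying them through Beam's test utilities follows; the
builder method names are assumed from the testutils package, and every value
is a placeholder rather than this job's configuration:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // Placeholder values; real runs take these from pipeline options,
        // which this build did not supply.
        InfluxDBSettings settings = InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_test_metrics")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
      }
    }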

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 28.468 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 0s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/usu6zowfkx4d4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2410

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2410/display/redirect?page=changes>

Changes:

[heejong] [BEAM-12805] Fix XLang CombinePerKey test by explicitly assigning the

[BenWhitehead] [BEAM-8376] Google Cloud Firestore Connector - Add handling for

[noreply] Decreasing peak memory usage for beam.TupleCombineFn (#15494)

[noreply] [BEAM-12802] Add support for prefetch through data layers down through

[noreply] [BEAM-11097] Add implementation of side input cache (#15483)


------------------------------------------
[...truncated 341.54 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9bf3ac3215333546daf9554f53be71e8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 11, 2021 12:46:22 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
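
Editor's note: the non-deprecated spelling would be, e.g.,
--sdkContainerImage=apache/beam_java8_sdk:2.33.0 (image name and tag assumed);
this run instead passes an empty --workerHarnessContainerImage=, as visible in
the test invocation above.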
    Sep 11, 2021 12:46:23 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 11, 2021 12:46:23 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 11, 2021 12:46:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 12:46:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 12:46:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 11, 2021 12:46:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 12:46:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 12:46:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 11, 2021 12:46:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1386607672]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 11, 2021 12:46:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 11, 2021 12:46:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 12:46:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 11, 2021 12:46:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 11, 2021 12:46:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 12:46:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 11, 2021 12:46:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@725804086]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 11, 2021 12:46:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 12:46:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 12:46:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 11, 2021 12:46:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 11, 2021 12:46:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 11, 2021 12:46:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 11, 2021 12:46:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 11, 2021 12:46:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 11, 2021 12:46:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 11, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 11, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-YtvkOIsfQYfFzG89aVBO_8oNHoiij9SynwUJLdMkn5A.jar
    Sep 11, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5889603143190849404.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-4ObVKigR7akgH0SF93qu6ZbuTm3S_tjAlaGdLhLHDeI.jar
    Sep 11, 2021 12:46:35 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 11, 2021 12:46:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 11, 2021 12:46:35 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 72b471ff8a43d5c391d190cc2fbd851d2d132b298633fb71733ca0c0a65a4a06> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-crRx_4pD1cOR0ZDML72FHS0TKymGM_txczygwKZaSgY.pb
    Sep 11, 2021 12:46:37 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 11, 2021 12:46:37 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 11, 2021 12:46:37 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 11, 2021 12:46:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 11, 2021 12:46:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-10_17_46_37-7221468495849970836?project=apache-beam-testing
    Sep 11, 2021 12:46:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-10_17_46_37-7221468495849970836
    Sep 11, 2021 12:46:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-10_17_46_37-7221468495849970836
    Sep 11, 2021 12:46:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-11T00:46:41.590Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 11, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T00:46:52.987Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 11, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T00:46:53.820Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 11, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T00:46:53.855Z: Expanding GroupByKey operations into optimizable parts.
    Sep 11, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T00:46:53.895Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 11, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T00:46:53.950Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 11, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T00:46:53.986Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 11, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T00:46:54.015Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 11, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T00:46:54.325Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 11, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T00:46:54.392Z: Starting 5 workers in us-central1-c...
    Sep 11, 2021 12:47:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T00:47:14.690Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 11, 2021 12:47:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2021-09-11T00:47:29.879Z: Startup of the worker pool in zone us-central1-c failed to bring up any of the desired 5 workers. QUOTA_EXCEEDED: Quota 'CPUS' exceeded.  Limit: 2500.0 in region us-central1.
    Sep 11, 2021 12:47:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2021-09-11T00:47:29.917Z: Workflow failed.
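
Editor's note: QUOTA_EXCEEDED is an infrastructure failure, not a test bug.
Regional CPU usage against the 2500-CPU limit can be inspected with -- standard
gcloud, not part of this log:
    > gcloud compute regions describe us-central1 --project=apache-beam-testing
which lists every regional quota, including CPUS, with its limit and current
usage.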
    Sep 11, 2021 12:47:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T00:47:29.992Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 11, 2021 12:47:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T00:47:30.080Z: Cleaning up.
    Sep 11, 2021 12:47:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T00:47:30.180Z: Stopping worker pool...
    Sep 11, 2021 12:47:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-11T00:47:48.900Z: Worker pool stopped.
    Sep 11, 2021 12:47:58 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-10_17_46_37-7221468495849970836 failed with status FAILED.
    Sep 11, 2021 12:47:58 AM org.apache.beam.sdk.testutils.metrics.MetricsReader getCounterMetric
    SEVERE: Failed to get metric fields_read, from namespace org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
    Sep 11, 2021 12:47:59 AM org.apache.beam.sdk.testutils.metrics.MetricsReader getCounterMetric
    SEVERE: Failed to get metric fields_read, from namespace org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 3f7d9fed-bce0-4e58-b811-124514ebe7d9 and timestamp: 2021-09-11T00:47:58.903000000Z:
                     Metric:                    Value:
                   read_time                       0.0
                 fields_read                      -1.0
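
Editor's note: the job terminated FAILED before any records were read, so the
fields_read counter was never created; the -1.0 above is evidently the
MetricsReader fallback for a missing metric, not a measurement.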

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 11, 2021 12:47:59 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 1 mins 40.845 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 3m 37s
152 actionable tasks: 99 executed, 53 from cache

Publishing build scan...
https://gradle.com/s/kd4wmcmwdfh5w

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2409

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2409/display/redirect?page=changes>

Changes:

[noreply] Added type annotations to some combiners missing it. (#15414)

[noreply] [BEAM-12634] JmsIO auto scaling feature (#15464)

[noreply] [BEAM-12662] Get Flink version from cluster. (#15223)

[noreply] Port changes from Pub/Sub Lite to beam (#15418)


------------------------------------------
[...truncated 412.56 KB...]
Gradle Test Executor 12 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 38dcad72c650c7c44f736b7038162c5a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 12'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 12'
Successfully started process 'Gradle Test Executor 12'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 10, 2021 7:00:24 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 10, 2021 7:00:25 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 10, 2021 7:00:26 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 10, 2021 7:00:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 7:00:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 7:00:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 10, 2021 7:00:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 7:00:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 7:00:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 10, 2021 7:00:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1386607672]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 10, 2021 7:00:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 10, 2021 7:00:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 7:00:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 10, 2021 7:00:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 10, 2021 7:00:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 7:00:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 10, 2021 7:00:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@725804086]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 10, 2021 7:00:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 7:00:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 7:00:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 10, 2021 7:00:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 7:00:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 7:00:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 10, 2021 7:00:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 10, 2021 7:00:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 10, 2021 7:00:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 10, 2021 7:00:35 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 10, 2021 7:00:35 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-h_pN1a12Kq6Q7UgG0NwuMuy2KLX9eKEV-XpQQIHwHqY.jar
    Sep 10, 2021 7:00:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4898916112002885638.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-DewQy2pSPyoXzN2QRqIxBShnnP0VTsZ0oI-sq4hpDSg.jar
    Sep 10, 2021 7:00:36 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 10, 2021 7:00:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 10, 2021 7:00:36 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 89bc71e6dda8b7e8097504b810cb791447025affc285ccf6110fef6d9d1663bb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ibxx5t2ot-gJdQS4EMt5FEcCWv_Chcz2EQ_vbZ0WY7s.pb
    Sep 10, 2021 7:00:38 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 10, 2021 7:00:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 10, 2021 7:00:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 10, 2021 7:00:39 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 10, 2021 7:00:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-10_12_00_39-2700056579921925071?project=apache-beam-testing
    Sep 10, 2021 7:00:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-10_12_00_39-2700056579921925071
    Sep 10, 2021 7:00:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-10_12_00_39-2700056579921925071
    Sep 10, 2021 7:00:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-10T19:00:42.835Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 10, 2021 7:00:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:00:50.439Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 10, 2021 7:00:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:00:51.364Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 10, 2021 7:00:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:00:51.416Z: Expanding GroupByKey operations into optimizable parts.
    Sep 10, 2021 7:00:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:00:51.453Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 10, 2021 7:00:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:00:51.512Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 10, 2021 7:00:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:00:51.555Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 10, 2021 7:00:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:00:51.576Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 10, 2021 7:00:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:00:52.054Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 10, 2021 7:00:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:00:52.145Z: Starting 5 workers in us-central1-c...
    Sep 10, 2021 7:01:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:01:14.071Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 10, 2021 7:01:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:01:26.365Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Sep 10, 2021 7:01:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:01:26.406Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Sep 10, 2021 7:01:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:01:36.685Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 10, 2021 7:02:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:01:59.773Z: Workers have started successfully.
    Sep 10, 2021 7:02:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:01:59.799Z: Workers have started successfully.
    Sep 10, 2021 7:02:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:02:31.652Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 10, 2021 7:02:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:02:31.792Z: Cleaning up.
    Sep 10, 2021 7:02:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:02:31.870Z: Stopping worker pool...
    Sep 10, 2021 7:04:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:04:50.340Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 10, 2021 7:04:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T19:04:50.400Z: Worker pool stopped.
    Sep 10, 2021 7:04:57 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-10_12_00_39-2700056579921925071 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 54e62b3e-2a76-4792-b7bd-5d2ec120f94e and timestamp: 2021-09-10T19:04:57.879000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      7.65

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 10, 2021 7:04:58 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
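
The warning above means the run collected metrics but had nowhere to publish them: InfluxDBPublisher skips the write when no measurement/database is configured. As a rough sketch of what enabling publication could look like -- the option names are an assumption based on Beam's test-utils conventions, and every value here is a placeholder rather than this job's real configuration -- the -DbeamTestPipelineOptions array would gain entries such as:

    "--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=beam_test_metrics","--influxHost=http://localhost:8086"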

Gradle Test Executor 12 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 37.548 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 20m 29s
152 actionable tasks: 151 executed, 1 from cache

Publishing build scan...
https://gradle.com/s/tk2k32ltybuao

Stopped 11 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2408

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2408/display/redirect?page=changes>

Changes:

[noreply] Register MapCoder, some comments/cleanup. (#15471)

[noreply] [BEAM-12588] Multimap user state proto changes (#15473)


------------------------------------------
[...truncated 339.92 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 21766bf14437d8b28e4d907b55c035fa
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'
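
For context on how the long -DbeamTestPipelineOptions JSON above reaches the test code: TestPipeline deserializes that system property into PipelineOptions when the suite starts. A minimal illustrative fragment (a sketch, not the IT's actual code):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    // TestPipeline reads the beamTestPipelineOptions system property set on the
    // Gradle worker JVM above and turns its JSON array into PipelineOptions.
    PipelineOptions options = TestPipeline.testingPipelineOptions();
    TestPipeline pipeline = TestPipeline.fromOptions(options);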

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 10, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 10, 2021 12:45:18 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 10, 2021 12:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 10, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 12:45:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 10, 2021 12:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 12:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 12:45:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 10, 2021 12:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1386607672]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

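The IllegalStateException above is a pipeline-construction failure, not a worker error: the output of ParDo(RowMonitor) is a PCollection of Beam Rows, and a Row has no inferable Coder unless a schema is attached. A minimal, self-contained sketch of the fix the message itself suggests (field names follow the query above; the types, values, and class name are assumptions, and this is not the IT's actual code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")
            .build();

        Pipeline p = Pipeline.create();
        PCollection<Row> rows = p.apply(
            Create.of(Row.withSchema(schema).addValues("a", "story", "t", 3).build())
                .withRowSchema(schema));

        // The failing PCollection is a ParDo output over Rows; attaching the
        // schema, as the error message suggests, lets the SDK infer a RowCoder.
        rows.apply(ParDo.of(new DoFn<Row, Row>() {
              @ProcessElement
              public void process(@Element Row row, OutputReceiver<Row> out) {
                out.output(row);
              }
            }))
            .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }
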
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 10, 2021 12:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 10, 2021 12:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 12:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 10, 2021 12:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 10, 2021 12:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 12:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 10, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@725804086]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 10, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 10, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 10, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 10, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
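
The two log lines above are the point of the push-down variant: only the four used fields and the filter are sent to the BigQuery Storage Read API instead of full-row scans. Outside Beam SQL, roughly the same direct read can be requested by hand; in this sketch the table name is a placeholder and the fragment is an illustration, not the IT's implementation:

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    BigQueryIO.readTableRows()
        .from("some-project:some_dataset.hacker_news")  // placeholder table
        .withMethod(TypedRead.Method.DIRECT_READ)
        // Column projection and row filter handed to the Storage Read API,
        // mirroring usedFields and BigQueryFilter in the BEAMPlan above.
        .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
        .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
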
    Sep 10, 2021 12:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 10, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 10, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-h_pN1a12Kq6Q7UgG0NwuMuy2KLX9eKEV-XpQQIHwHqY.jar
    Sep 10, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3445344613483535286.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-it6_M6HqRPgDuwLd701veraidtRKarLwKzLVaHJyHXo.jar
    Sep 10, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 10, 2021 12:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 10, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104003 bytes, hash 2172ed740860633236368d35a3344a3146d311ca4e57b47c5184859978358c55> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-IXLtdAhgYzI2No01ozRKMUbTEcpOV7R8UYSFmXg1jFU.pb
    Sep 10, 2021 12:45:32 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 10, 2021 12:45:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 10, 2021 12:45:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 10, 2021 12:45:33 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 10, 2021 12:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-10_05_45_33-7298934043500527143?project=apache-beam-testing
    Sep 10, 2021 12:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-10_05_45_33-7298934043500527143
    Sep 10, 2021 12:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-10_05_45_33-7298934043500527143
    Sep 10, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-10T12:45:37.036Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 10, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:45:42.646Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 10, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:45:43.478Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 10, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:45:43.514Z: Expanding GroupByKey operations into optimizable parts.
    Sep 10, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:45:43.551Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 10, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:45:43.642Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 10, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:45:43.681Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 10, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:45:43.708Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 10, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:45:44.189Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 10, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:45:44.623Z: Starting 5 workers in us-central1-c...
    Sep 10, 2021 12:46:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:46:14.454Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 10, 2021 12:46:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:46:19.012Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 10, 2021 12:46:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:46:19.045Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 10, 2021 12:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:46:29.311Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 10, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:46:55.491Z: Workers have started successfully.
    Sep 10, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:46:55.512Z: Workers have started successfully.
    Sep 10, 2021 12:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:47:28.475Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 10, 2021 12:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:47:28.631Z: Cleaning up.
    Sep 10, 2021 12:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:47:28.719Z: Stopping worker pool...
    Sep 10, 2021 12:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:49:42.673Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 10, 2021 12:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T12:49:42.708Z: Worker pool stopped.
    Sep 10, 2021 12:49:49 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-10_05_45_33-7298934043500527143 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5c3e76f6-6a83-4a74-b1cc-1a4147eb5659 and timestamp: 2021-09-10T12:49:49.167000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.999

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 10, 2021 12:49:49 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 35.391 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 29s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/o657fyczukgiq

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2407

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2407/display/redirect?page=changes>

Changes:

[noreply] [BEAM-5097] Increment counter for "small words" in go SDK example


------------------------------------------
[...truncated 341.12 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is a9f5d2013a02d9d2f955346998a3c933
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 10, 2021 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 10, 2021 6:45:30 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 10, 2021 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 10, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 10, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 6:45:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 10, 2021 6:45:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@213658433]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 10, 2021 6:45:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 10, 2021 6:45:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 6:45:36 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 10, 2021 6:45:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 10, 2021 6:45:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 6:45:36 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 10, 2021 6:45:36 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2038704487]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 10, 2021 6:45:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 6:45:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 6:45:36 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 10, 2021 6:45:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 6:45:36 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 6:45:36 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 10, 2021 6:45:36 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 10, 2021 6:45:37 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 10, 2021 6:45:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 10, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 10, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-ucAidJrqxK96xuFPkyoKLCyoB4CvF-dHe3bPkncTQpU.jar
    Sep 10, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3787861847279080959.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-tCKlsxRDPI6hGVkUhmUCo3kAJe0jNM6i2QbV-S2rxzk.jar
    Sep 10, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 10, 2021 6:45:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 10, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104002 bytes, hash 36db5eb376d153c028f6969b0a525a168f9337732b4521a030613fd662cd1b8f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Nttes3bRU8Ao9pabClJaFo-TN3MrRSGgMGE_1mLNG48.pb
    Sep 10, 2021 6:45:44 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 10, 2021 6:45:44 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 10, 2021 6:45:44 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 10, 2021 6:45:44 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 10, 2021 6:45:45 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-09_23_45_44-6102199205758719663?project=apache-beam-testing
    Sep 10, 2021 6:45:45 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-09_23_45_44-6102199205758719663
    Sep 10, 2021 6:45:45 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-09_23_45_44-6102199205758719663
    Sep 10, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-10T06:45:48.778Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 10, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:46:57.003Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 10, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:46:57.783Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 10, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:46:57.827Z: Expanding GroupByKey operations into optimizable parts.
    Sep 10, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:46:57.869Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 10, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:46:57.960Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 10, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:46:57.991Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 10, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:46:58.023Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 10, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:46:58.501Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 10, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:46:58.602Z: Starting 5 workers in us-central1-c...
    Sep 10, 2021 6:47:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:47:08.308Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 10, 2021 6:47:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:47:33.516Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Sep 10, 2021 6:47:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:47:33.546Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Sep 10, 2021 6:47:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:47:43.831Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 10, 2021 6:48:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:48:07.331Z: Workers have started successfully.
    Sep 10, 2021 6:48:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:48:07.422Z: Workers have started successfully.
    Sep 10, 2021 6:48:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:48:34.915Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 10, 2021 6:48:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:48:35.453Z: Cleaning up.
    Sep 10, 2021 6:48:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:48:35.572Z: Stopping worker pool...
    Sep 10, 2021 6:51:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:51:01.913Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 10, 2021 6:51:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T06:51:01.986Z: Worker pool stopped.
    Sep 10, 2021 6:51:07 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-09_23_45_44-6102199205758719663 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 3010cbc0-0ae3-4de1-9687-671292c97ac1 and timestamp: 2021-09-10T06:51:07.724000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.235

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 10, 2021 6:51:08 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 5 mins 43.06 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 44s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/ckteho7wqctmy

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2406

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2406/display/redirect?page=changes>

Changes:

[ruwan.lambrichts] Clarify additional_bq_parameters argument

[noreply] Fix broken 'differences from pandas' link

[noreply] Added GroupBy row in Aggregation table.

[Etienne Chauchot] [BEAM-5172] Temporary ignore testSplit and testSizes tests waiting for a

[samuelw] [BEAM-12740] Remove matching to filter files when renaming gcs files in

[noreply] [BEAM-3304] Helper functions for triggers (#15430)

[esert] Bump a throttling counter on BigQueryRead retries due to


------------------------------------------
[...truncated 346.51 KB...]
Caching disabled for task ':sdks:java:extensions:sql:testJar' because:
  Caching has not been enabled for the task
Task ':sdks:java:extensions:sql:testJar' is not up-to-date because:
  No history is available.
:sdks:java:extensions:sql:testJar (Thread[Daemon worker,5,main]) completed. Took 0.133 secs.
:sdks:java:extensions:sql:perf-tests:compileTestJava (Thread[Daemon worker,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:compileTestJava
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:compileTestJava'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:compileTestJava' is 8b54be572e3a891ec50ae23f0cd874b6
Task ':sdks:java:extensions:sql:perf-tests:compileTestJava' is not up-to-date because:
  No history is available.
The input changes require a full rebuild for incremental task ':sdks:java:extensions:sql:perf-tests:compileTestJava'.
Full recompilation is required because no incremental change information is available. This is usually caused by clean builds or changing compiler arguments.
Compiling with JDK Java compiler API.
Created classpath snapshot for incremental compilation in 0.125 secs. 3324 duplicate classes found in classpath (see all with --debug).
Stored cache entry for task ':sdks:java:extensions:sql:perf-tests:compileTestJava' with cache key 8b54be572e3a891ec50ae23f0cd874b6
:sdks:java:extensions:sql:perf-tests:compileTestJava (Thread[Daemon worker,5,main]) completed. Took 1.934 secs.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 4 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 52158865a6a1134246d7f0cd77fcdbe5
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
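
The dual-binding warning is harmless for the run itself (SLF4J simply settles on the slf4j-jdk14 binding), but if the noise is unwanted, the usual Gradle-side remedy is to exclude one binding from the test classpath. A sketch in the Groovy DSL the build already uses, assuming slf4j-jdk14 is the binding to drop:

    configurations.all {
        // assumed: drop the JDK14 binding so only the worker jar's binding remains
        exclude group: 'org.slf4j', module: 'slf4j-jdk14'
    }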

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 10, 2021 12:46:32 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 10, 2021 12:46:33 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 10, 2021 12:46:34 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 10, 2021 12:46:37 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 12:46:37 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 12:46:38 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 10, 2021 12:46:38 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 12:46:38 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 12:46:38 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 10, 2021 12:46:39 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1386607672]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
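
The coder error above is Beam's standard diagnostic for a PCollection<Row> that was never given a schema: without one, the CoderRegistry cannot infer a RowCoder, exactly as the message says. A minimal sketch of the fix the exception itself suggests, assuming a hypothetical PCollection<Row> named "rows" and a hand-built Schema for the four selected columns (names here are illustrative, not lifted from BigQueryIOPushDownIT):

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Assumed: "rows" is the PCollection<Row> coming out of the source rel.
    Schema schema = Schema.builder()
        .addStringField("author")
        .addStringField("type")
        .addStringField("title")
        .addInt64Field("score")
        .build();
    rows.setRowSchema(schema);  // attaches the schema so a RowCoder can be inferred

Equivalently, rows.setCoder(RowCoder.of(schema)) sets the coder directly, which is the first remedy the exception lists.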

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 10, 2021 12:46:39 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 10, 2021 12:46:39 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 12:46:39 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 10, 2021 12:46:39 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 10, 2021 12:46:39 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 12:46:39 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 10, 2021 12:46:39 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@725804086]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 10, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 10, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 10, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 10, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 10, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 10, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
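
    This is the push-down working as intended: the BeamCalcRel from the earlier plans is gone, and both the projection (usedFields) and the supported filter travel with the source into the BigQuery Storage Read API. For comparison, a hand-written read with the same effect might look like the sketch below, using the public BigQueryIO API (the table id is an assumption for illustration; the test resolves its table through its own provider):

        import java.util.Arrays;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")  // assumed table id
            .withMethod(Method.DIRECT_READ)                 // Storage Read API
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");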
    Sep 10, 2021 12:46:40 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 10, 2021 12:46:44 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 10, 2021 12:46:44 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-ucAidJrqxK96xuFPkyoKLCyoB4CvF-dHe3bPkncTQpU.jar
    Sep 10, 2021 12:46:44 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3301184806270174235.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6EXH181S_JuJL-rLXn0F5Zi7ecJnf0DFyDuy8-YYA14.jar
    Sep 10, 2021 12:46:45 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 10, 2021 12:46:45 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 10, 2021 12:46:45 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104003 bytes, hash 286996d6b9ece45dc905ba2aa66565d4ecb08fdf6684b03495788253c30363d0> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-KGmW1rns5F3JBboqpmVl1Oywj99mhLA0lXiCU8MDY9A.pb
    Sep 10, 2021 12:46:47 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 10, 2021 12:46:47 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 10, 2021 12:46:47 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 10, 2021 12:46:48 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 10, 2021 12:46:49 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-09_17_46_48-13252814087671969139?project=apache-beam-testing
    Sep 10, 2021 12:46:49 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-09_17_46_48-13252814087671969139
    Sep 10, 2021 12:46:49 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-09_17_46_48-13252814087671969139
    Sep 10, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-10T00:46:52.073Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 10, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T00:46:59.051Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 10, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2021-09-10T00:47:00.326Z: Workflow failed. Causes: Project apache-beam-testing has insufficient quota(s) to execute this workflow with 1 instances in region us-central1. Quota summary (required/available): 1/19495 instances, 2/0 CPUs, 25/247716 disk GB, 0/2397 SSD disk GB, 1/272 instance groups, 1/275 managed instance groups, 1/501 instance templates, 1/724 in-use IP addresses.

    Please see https://cloud.google.com/compute/docs/resource-quotas about requesting more quota.
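
    The required/available pairs above show the run died on "2/0 CPUs", i.e. the regional CPU quota was exhausted, not a Beam problem. A standard way to inspect the same numbers (project name taken from the log) is:

        gcloud compute regions describe us-central1 \
            --project=apache-beam-testing \
            --flatten="quotas[]" \
            --format="table(quotas.metric, quotas.usage, quotas.limit)"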
    Sep 10, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T00:47:00.409Z: Cleaning up.
    Sep 10, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T00:47:00.504Z: Worker pool stopped.
    Sep 10, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-10T00:47:01.762Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 10, 2021 12:47:06 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-09_17_46_48-13252814087671969139 failed with status FAILED.
    Sep 10, 2021 12:47:06 AM org.apache.beam.sdk.testutils.metrics.MetricsReader getCounterMetric
    SEVERE: Failed to get metric fields_read, from namespace org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT
    Sep 10, 2021 12:47:06 AM org.apache.beam.sdk.testutils.metrics.MetricsReader getCounterMetric
    SEVERE: Failed to get metric fields_read, from namespace org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): db9d6a6b-3f3e-419b-a5af-88970102aeda and timestamp: 2021-09-10T00:47:06.309000000Z:
                     Metric:                    Value:
                   read_time                       0.0
                 fields_read                      -1.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 10, 2021 12:47:06 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
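
    The InfluxDB sink is optional for this suite; the warning only records that no target was configured. When publishing is wanted, Beam's load-test utilities take the target through pipeline options along these lines (flag names as used elsewhere in Beam's test framework; the values are illustrative assumptions):

        --influxDatabase=beam_test_metrics \
        --influxMeasurement=sql_bqio_read_java_batch \
        --influxHost=http://localhost:8086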

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 38.058 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 2m 44s
152 actionable tasks: 104 executed, 48 from cache

Publishing build scan...
https://gradle.com/s/twy4oukgudnbu

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2405

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2405/display/redirect?page=changes>

Changes:

[kawaigin] [BEAM-10708] Support streaming cache in beam_sql magic

[Luke Cwik] [BEAM-12769] Fix typo in test class name, CLass -> Class


------------------------------------------
[...truncated 337.77 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is f2b763118ffad8fa1c25bbf167741317
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 09, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 09, 2021 6:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 09, 2021 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 09, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 09, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 09, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 09, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 09, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1317539897]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 09, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 09, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 09, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 09, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 09, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1789235794]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 09, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 09, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 09, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-E-KPx8odQzZ4kFtSdYrJ4EpS3UhAdfKiSlspRWHWbTM.jar
    Sep 09, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4032344636116975973.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Otr6YtIn8OvieoMlgDZEcUCuafRgZZmHLZEb8Q5U9Mw.jar
    Sep 09, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 09, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 09, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104003 bytes, hash 8f9d6106d50493f00fb68aae6b8182fc5013ed9896169621264f26680e5ae6f0> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-j51hBtUEk_APtoqua4GC_FAT7ZiWFpYhJk8maA5a5vA.pb
    Sep 09, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 09, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 09, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 09, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 09, 2021 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-09_11_45_09-9517891140804248230?project=apache-beam-testing
    Sep 09, 2021 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-09_11_45_09-9517891140804248230
    Sep 09, 2021 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-09_11_45_09-9517891140804248230
    Sep 09, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-09T18:45:13.188Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 09, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:45:20.062Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 09, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:45:21.005Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 09, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:45:21.048Z: Expanding GroupByKey operations into optimizable parts.
    Sep 09, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:45:21.109Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 09, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:45:21.187Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 09, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:45:21.220Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 09, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:45:21.261Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 09, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:45:21.660Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 09, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:45:21.747Z: Starting 5 workers in us-central1-c...
    Sep 09, 2021 6:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:45:44.295Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 09, 2021 6:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:45:56.448Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 09, 2021 6:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:45:56.470Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 09, 2021 6:46:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:46:06.723Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 09, 2021 6:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:46:30.417Z: Workers have started successfully.
    Sep 09, 2021 6:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:46:30.444Z: Workers have started successfully.
    Sep 09, 2021 6:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:47:00.619Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 09, 2021 6:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:47:00.836Z: Cleaning up.
    Sep 09, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:47:00.962Z: Stopping worker pool...
    Sep 09, 2021 6:49:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:49:22.787Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 09, 2021 6:49:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T18:49:22.827Z: Worker pool stopped.
    Sep 09, 2021 6:49:37 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-09_11_45_09-9517891140804248230 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f0c2774c-b07d-468c-a595-fad1d2771713 and timestamp: 2021-09-09T18:49:37.316000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.656

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 09, 2021 6:49:37 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 46.536 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 19s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/a4hzz53d6tcxk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2404

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2404/display/redirect>

Changes:


------------------------------------------
[...truncated 338.28 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is f2b763118ffad8fa1c25bbf167741317
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 09, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 09, 2021 12:44:57 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 09, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 09, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 09, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 09, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 09, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 09, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1317539897]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 09, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 09, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 09, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 09, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 09, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1789235794]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 09, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 09, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 09, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 09, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 09, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 09, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
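
The plans above show what this test measures: under the DEFAULT read method a BeamCalcRel filters and projects after a full read, while under DIRECT_READ the planner emits a BeamPushDownIOSourceRel whose usedFields and BigQueryFilter move the projection and filter into the BigQuery Storage API read itself. A rough sketch of how such a table can be declared so the push-down applies, assuming the Beam SQL DDL for the bigquery table type (the column list is abbreviated and the LOCATION is a placeholder, not the test's real table):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Hypothetical DDL: DIRECT_READ selects the Storage API read, which is
        // what enables the filter/projection push-down shown in the BEAMPlan.
        String ddl =
            "CREATE EXTERNAL TABLE HACKER_NEWS(\n"
                + "  title VARCHAR, score BIGINT, `by` VARCHAR, type VARCHAR)\n"
                + "TYPE bigquery\n"
                + "LOCATION 'my-project:my_dataset.hacker_news'\n"
                + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";

        PCollection<Row> result =
            p.apply(
                SqlTransform.query(
                        "SELECT `by` AS author, type, title, score "
                            + "FROM HACKER_NEWS "
                            + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                    .withDdlString(ddl));

        p.run().waitUntilFinish();
      }
    }

With "method": "DEFAULT" in TBLPROPERTIES, the same query keeps its filter in a downstream BeamCalcRel, as in the DEFAULT-method plans elsewhere in this log.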
    Sep 09, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 09, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 09, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-E-KPx8odQzZ4kFtSdYrJ4EpS3UhAdfKiSlspRWHWbTM.jar
    Sep 09, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8725071349581070517.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-zNV2c3gwees571su6TyCHDv9tDoPuabHg473JHoE_Ds.jar
    Sep 09, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 09, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 09, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104003 bytes, hash afa33cda3f3427844c94243b0ba3a45c77c2e7c3b3e47ff7bf2d687657b585d8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-r6M82j80J4RMlCQ7C6OkXHfC58Oz5H_3vy1odle1hdg.pb
    Sep 09, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 09, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 09, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 09, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 09, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-09_05_45_12-5706833399107965069?project=apache-beam-testing
    Sep 09, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-09_05_45_12-5706833399107965069
    Sep 09, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-09_05_45_12-5706833399107965069
    Sep 09, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-09T12:45:15.804Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 09, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:45:23.877Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 09, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:45:24.701Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 09, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:45:24.750Z: Expanding GroupByKey operations into optimizable parts.
    Sep 09, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:45:24.782Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 09, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:45:24.849Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 09, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:45:24.891Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 09, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:45:24.923Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 09, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:45:25.434Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 09, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:45:25.538Z: Starting 5 workers in us-central1-c...
    Sep 09, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:45:43.483Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 09, 2021 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:46:10.768Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 09, 2021 12:46:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:46:35.562Z: Workers have started successfully.
    Sep 09, 2021 12:46:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:46:35.643Z: Workers have started successfully.
    Sep 09, 2021 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:47:07.055Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 09, 2021 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:47:07.194Z: Cleaning up.
    Sep 09, 2021 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:47:07.266Z: Stopping worker pool...
    Sep 09, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:49:26.442Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 09, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T12:49:26.486Z: Worker pool stopped.
    Sep 09, 2021 12:49:33 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-09_05_45_12-5706833399107965069 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2040bd2f-fac5-432f-8d03-598d69026ea5 and timestamp: 2021-09-09T12:49:33.092000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.121

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 09, 2021 12:49:33 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
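
This warning only affects metrics export: the read_time and fields_read values printed above are not published because the InfluxDB measurement/database settings were not supplied to the publisher. A rough sketch of how such settings are assembled, assuming the testutils InfluxDBSettings builder (the method names here are an assumption from Beam's load-test utilities, not verified against this build's sources):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxConfigSketch {
      public static void main(String[] args) {
        // Assumption: builder methods named as in Beam's load-test utilities;
        // host, database, and measurement values are hypothetical.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")   // hypothetical endpoint
                .withDatabase("beam_test_metrics")   // the missing 'database'
                .withMeasurement("sql_bqio_read")    // the missing 'measurement'
                .get();
      }
    }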

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 41.233 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 15s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/tj2hbwwkb4ssi

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2403

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2403/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #15480: [BEAM-12356] Make sure DatasetService is

[noreply] [BEAM-11981] Java Bigtable - Implement IO Request Count metrics (#15342)

[noreply] [BEAM-12834] Improve Go SDK cross-language documentation and API.


------------------------------------------
[...truncated 360.87 KB...]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 09, 2021 6:48:37 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 09, 2021 6:48:38 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 09, 2021 6:48:39 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 09, 2021 6:48:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 09, 2021 6:48:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 6:48:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 09, 2021 6:48:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 09, 2021 6:48:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 6:48:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 09, 2021 6:48:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1317539897]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 09, 2021 6:48:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 09, 2021 6:48:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 6:48:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 09, 2021 6:48:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 09, 2021 6:48:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 6:48:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 09, 2021 6:48:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1789235794]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 09, 2021 6:48:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 09, 2021 6:48:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 6:48:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 09, 2021 6:48:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 09, 2021 6:48:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 6:48:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 09, 2021 6:48:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 09, 2021 6:48:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 09, 2021 6:48:45 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 09, 2021 6:48:48 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 09, 2021 6:48:48 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-E-KPx8odQzZ4kFtSdYrJ4EpS3UhAdfKiSlspRWHWbTM.jar
    Sep 09, 2021 6:48:48 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4102403725535759715.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-YuEt9d4aY18rG-i3uAL0CA2cr-0Z5YBkJMJ3DvNaBa8.jar
    Sep 09, 2021 6:48:48 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/join-library/build/libs/beam-sdks-java-extensions-join-library-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-join-library-2.34.0-SNAPSHOT-xz-aSlhizJi8jigGo3amZ2g_MkglE3dBulicsi2UX5U.jar
    Sep 09, 2021 6:48:48 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.34.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.34.0-SNAPSHOT-tests-I-osQrG-lSEES6RD4ibuovIeVl-_U3nOEEsBLo1rYrA.jar
    Sep 09, 2021 6:48:49 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.34.0-SNAPSHOT-4bF8QpaxkWlkdPssX5OV0S9rIzFuRU0pojrN2fBwjjk.jar
    Sep 09, 2021 6:48:49 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/udf/build/libs/beam-sdks-java-extensions-sql-udf-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-udf-2.34.0-SNAPSHOT-MlQQAS1Je1cH4lXtIOKM3zL6o97S5F7sJQ3zUailw08.jar
    Sep 09, 2021 6:48:49 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 0 seconds
    Sep 09, 2021 6:48:49 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 09, 2021 6:48:49 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104003 bytes, hash 571bfc393a8d0f6b3a33b7d0823acafed0530022247a92f96ee8625f31fccaee> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Vxv8OTqND2s6M7fQgjrK_tBTACIkepL5buhiXzH8yu4.pb
    Sep 09, 2021 6:48:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 09, 2021 6:48:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 09, 2021 6:48:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 09, 2021 6:48:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 09, 2021 6:48:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-08_23_48_52-5219237452619354559?project=apache-beam-testing
    Sep 09, 2021 6:48:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-08_23_48_52-5219237452619354559
    Sep 09, 2021 6:48:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-08_23_48_52-5219237452619354559
    Sep 09, 2021 6:48:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-09T06:48:55.523Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 09, 2021 6:49:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:49:01.268Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 09, 2021 6:49:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:49:01.914Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 09, 2021 6:49:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:49:01.974Z: Expanding GroupByKey operations into optimizable parts.
    Sep 09, 2021 6:49:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:49:02.007Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 09, 2021 6:49:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:49:02.082Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 09, 2021 6:49:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:49:02.110Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 09, 2021 6:49:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:49:02.131Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 09, 2021 6:49:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:49:02.500Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 09, 2021 6:49:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:49:02.580Z: Starting 5 workers in us-central1-a...
    Sep 09, 2021 6:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:49:12.854Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 09, 2021 6:49:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-09T06:49:39.926Z: Autoscaling: Startup of the worker pool in zone us-central1-a reached 2 workers, but the goal was 5 workers. The service will retry. QUOTA_EXCEEDED: Quota 'CPUS' exceeded.  Limit: 2000.0 in region us-central1.
    Sep 09, 2021 6:49:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:49:54.315Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 09, 2021 6:49:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:49:54.337Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 09, 2021 6:50:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:50:14.937Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 09, 2021 6:50:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:50:14.989Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 09, 2021 6:50:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:50:16.374Z: Workers have started successfully.
    Sep 09, 2021 6:50:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:50:16.412Z: Workers have started successfully.
    Sep 09, 2021 6:50:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:50:48.438Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 09, 2021 6:51:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:51:01.460Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 09, 2021 6:51:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:51:01.603Z: Cleaning up.
    Sep 09, 2021 6:51:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:51:01.681Z: Stopping worker pool...
    Sep 09, 2021 6:53:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:53:22.721Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 09, 2021 6:53:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T06:53:22.765Z: Worker pool stopped.
    Sep 09, 2021 6:53:29 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-08_23_48_52-5219237452619354559 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b4a6ad67-4b8d-4500-b63d-649a938374a1 and timestamp: 2021-09-09T06:53:29.738000000Z:
                     Metric:                    Value:
                   read_time                     22.55
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 09, 2021 6:53:30 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 6 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 58.44 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 9m 10s
152 actionable tasks: 112 executed, 40 from cache

Publishing build scan...
https://gradle.com/s/42gyfcvvqbb7y

Stopped 5 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2402

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2402/display/redirect>

Changes:


------------------------------------------
[...truncated 351.02 KB...]


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@138530218]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 09, 2021 12:45:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 09, 2021 12:45:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 12:45:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 09, 2021 12:45:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 09, 2021 12:45:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 09, 2021 12:45:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 09, 2021 12:45:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 09, 2021 12:45:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 09, 2021 12:45:46 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 09, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 09, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-avG4QDSZYig5aaU1dJDVNXG4awWsT7XhuC5e-Ob-kdc.jar
    Sep 09, 2021 12:45:50 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3013290146853345367.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-LVPnf-GhY1mCzwb8HXhG26nUAADpCXnjI97UN5oq5qA.jar
    Sep 09, 2021 12:45:50 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mongodb/mongo-java-driver/3.12.7/1f45c6a397feeb46da75425619333d1cc6f90f78/mongo-java-driver-3.12.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mongo-java-driver-3.12.7-D_zgBJWNb9mzmuetJ37a0X9XtpcfSGsXYpxe6eE8Tao.jar
    Sep 09, 2021 12:45:50 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Sep 09, 2021 12:45:50 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 09, 2021 12:45:50 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104003 bytes, hash 2a665889f596ef4ef416cec14c0d8590b2ea4de37720055c36172e588687465d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-KmZYifWW7070Fs7BTA2FkLLqTeN3IAVcNhcuWIaHRl0.pb
    Sep 09, 2021 12:45:53 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 09, 2021 12:45:53 AM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:625)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:264)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:383)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1900(InstantiatingGrpcChannelProvider.java:82)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:239)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:249)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:205)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:136)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)
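
The SEVERE entry above is gRPC's orphaned-channel detector: a ManagedChannel created for bigquerystorage.googleapis.com was garbage-collected without an orderly shutdown. It does not fail the test, but the remedy the message asks for looks like the following minimal sketch (standalone and illustrative, not Beam's actual client code):

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;
    import java.util.concurrent.TimeUnit;

    public class ChannelShutdownSketch {
      public static void main(String[] args) throws InterruptedException {
        ManagedChannel channel =
            ManagedChannelBuilder.forTarget("bigquerystorage.googleapis.com:443")
                .useTransportSecurity()
                .build();
        try {
          // ... issue RPCs on the channel ...
        } finally {
          // Orderly shutdown, then a bounded wait, as the warning requests.
          channel.shutdown();
          if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
            channel.shutdownNow(); // force-close if it does not drain in time
          }
        }
      }
    }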

    Sep 09, 2021 12:45:53 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 09, 2021 12:45:53 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 09, 2021 12:45:54 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 09, 2021 12:45:55 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-08_17_45_54-9044668124243813912?project=apache-beam-testing
    Sep 09, 2021 12:45:55 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-08_17_45_54-9044668124243813912
    Sep 09, 2021 12:45:55 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-08_17_45_54-9044668124243813912
    Sep 09, 2021 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-09T00:45:57.759Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 09, 2021 12:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:46:04.041Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 09, 2021 12:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:46:05.024Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 09, 2021 12:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:46:05.055Z: Expanding GroupByKey operations into optimizable parts.
    Sep 09, 2021 12:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:46:05.082Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 09, 2021 12:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:46:05.133Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 09, 2021 12:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:46:05.160Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 09, 2021 12:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:46:05.184Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 09, 2021 12:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:46:05.548Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 09, 2021 12:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:46:05.638Z: Starting 5 workers in us-central1-a...
    Sep 09, 2021 12:46:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:46:13.056Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 09, 2021 12:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:46:49.273Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 09, 2021 12:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:47:14.824Z: Workers have started successfully.
    Sep 09, 2021 12:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:47:14.848Z: Workers have started successfully.
    Sep 09, 2021 12:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:47:43.436Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 09, 2021 12:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:47:43.578Z: Cleaning up.
    Sep 09, 2021 12:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:47:43.688Z: Stopping worker pool...
    Sep 09, 2021 12:50:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:50:06.553Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 09, 2021 12:50:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-09T00:50:06.606Z: Worker pool stopped.
    Sep 09, 2021 12:50:13 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-08_17_45_54-9044668124243813912 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 3f64b9b3-c500-4886-ac1e-9cc5e6bcd348 and timestamp: 2021-09-09T00:50:13.488000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.131

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 09, 2021 12:50:14 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.043 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.048 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 40.737 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 52s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/kzmzobvcrsykc

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2401

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2401/display/redirect?page=changes>

Changes:

[suztomo] [BEAM-11205] Upgrading the Libraries BOM to v22

[randomstep] [BEAM-12708] Bump arrow-memory-netty

[Etienne Chauchot] [BEAM-12153] implement GroupByKey with CombinePerKey with Concatenate

[Etienne Chauchot] [BEAM-11023] Increase memory in SS Validates runner tests to avoid OOM

[vincent.marquez] [BEAM-9008] adds CassandraIO.readAll

[Etienne Chauchot] [BEAM-12727] extract Concatenate CombineFn to runner-core module to

[noreply] Add display data for JdbcIO.write (#15460)


------------------------------------------
[...truncated 368.90 KB...]


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731892915]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
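
The exception message above already names the fix: a PCollection of Beam Rows has no coder the registry can infer, so the transform producing it must be given a schema via PCollection.setRowSchema. A minimal, self-contained sketch of that pattern -- the pipeline, schema, and field names below are illustrative assumptions, not the test's actual code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SetRowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Schema mirroring the SELECT list of the test query (illustrative only).
        final Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")
            .build();

        PCollection<Row> rows = p
            .apply(Create.of("alice,story,hello,3"))
            .apply(ParDo.of(new DoFn<String, Row>() {
              @ProcessElement
              public void processElement(@Element String line, OutputReceiver<Row> out) {
                String[] f = line.split(",");
                out.output(Row.withSchema(schema)
                    .addValues(f[0], f[1], f[2], Integer.parseInt(f[3])).build());
              }
            }))
            // A ParDo emitting Row has no inferable coder; without this call the
            // pipeline fails with the same IllegalStateException as above.
            .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }

With the schema attached, Beam derives a RowCoder for the collection automatically, which is exactly what the CoderRegistry cannot do on its own for Row.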

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 08, 2021 6:58:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 08, 2021 6:58:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 08, 2021 6:58:34 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
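
In user code, the statement the planner logs here is a plain SqlTransform applied to a tagged input. A sketch assuming a small in-memory table registered under the name HACKER_NEWS (illustrative only, not the test harness):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    public class SqlQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")
            .build();

        // One hand-built row stands in for the real table.
        PCollection<Row> hackerNews = p.apply(
            Create.of(Row.withSchema(schema)
                    .addValues("alice", "story", "hello", 3).build())
                .withRowSchema(schema));

        // The tuple tag becomes the table name visible to the SQL statement.
        PCollection<Row> result = PCollectionTuple
            .of(new TupleTag<>("HACKER_NEWS"), hackerNews)
            .apply(SqlTransform.query(
                "SELECT `by` AS author, `type`, `title`, `score` "
                    + "FROM HACKER_NEWS "
                    + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));

        p.run().waitUntilFinish();
      }
    }
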
    Sep 08, 2021 6:58:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 08, 2021 6:58:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 08, 2021 6:58:34 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 08, 2021 6:58:35 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 08, 2021 6:58:35 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
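
For context, the projection and filter push-down reported above (usedFields plus the supported BigQueryFilter) map onto options BigQueryIO exposes directly for DIRECT_READ: withSelectedFields and withRowRestriction. A sketch under assumptions -- the table reference below is the public Hacker News dataset, not necessarily the table this test reads:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    public class PushDownEquivalentSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.apply(BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")  // assumed table id
            .withMethod(Method.DIRECT_READ)
            // Projection push-down: only these columns leave the Storage API.
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // Predicate push-down: rows are filtered server-side.
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
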
    Sep 08, 2021 6:58:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 08, 2021 6:58:41 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 08, 2021 6:58:41 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-avG4QDSZYig5aaU1dJDVNXG4awWsT7XhuC5e-Ob-kdc.jar
    Sep 08, 2021 6:58:43 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8441147360364468252.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-kwkXuvBrNImCSQycrQ6mtyejOx2Nxfo-AMJPN_XyHOk.jar
    Sep 08, 2021 6:58:44 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigtable-emulator/0.137.1/b1639aa134de1302e43d9e9c3843f6ff853c510f/google-cloud-bigtable-emulator-0.137.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigtable-emulator-0.137.1-V2SlLU4BjIGf5QoF7YL-A-QNjw49NdXrNM4QoX9MXSI.jar
    Sep 08, 2021 6:58:47 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 5 seconds
    Sep 08, 2021 6:58:47 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 08, 2021 6:58:47 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104000 bytes, hash a3f86994385fa585e082d4f97298ce1851b14fdef302fcb3efbc0cbff0caafb6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-o_hplDhfpYXggtT5cpjOGFGxT97zAvyz77wMv_DKr7Y.pb
    Sep 08, 2021 6:58:52 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 08, 2021 6:58:52 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:625)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:264)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:383)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1900(InstantiatingGrpcChannelProvider.java:82)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:239)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:249)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:205)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:136)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)
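
The warning's own instructions describe the standard gRPC cleanup sequence: shutdown(), then awaitTermination(), escalating to shutdownNow() if the channel does not drain. The orphaned channel here is created inside BigQueryServicesImpl, so the cleanup belongs in the SDK rather than in the test; the following only sketches the pattern the message asks for, with the target taken from the log line above:

    import java.util.concurrent.TimeUnit;
    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    public class ChannelShutdownSketch {
      public static void main(String[] args) throws InterruptedException {
        ManagedChannel channel = ManagedChannelBuilder
            .forTarget("bigquerystorage.googleapis.com:443")
            .build();
        try {
          // ... issue RPCs over the channel ...
        } finally {
          channel.shutdown();                       // stop accepting new calls
          if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
            channel.shutdownNow();                  // cancel in-flight calls
            channel.awaitTermination(5, TimeUnit.SECONDS);
          }
        }
      }
    }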

    Sep 08, 2021 6:58:52 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 08, 2021 6:58:52 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 08, 2021 6:58:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 08, 2021 6:58:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-08_11_58_53-12780752956074070003?project=apache-beam-testing
    Sep 08, 2021 6:58:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-08_11_58_53-12780752956074070003
    Sep 08, 2021 6:58:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-08_11_58_53-12780752956074070003
    Sep 08, 2021 6:58:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-08T18:58:56.367Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 08, 2021 6:59:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T18:59:03.058Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 08, 2021 6:59:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T18:59:03.807Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 08, 2021 6:59:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T18:59:03.839Z: Expanding GroupByKey operations into optimizable parts.
    Sep 08, 2021 6:59:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T18:59:03.866Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 08, 2021 6:59:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T18:59:03.923Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 08, 2021 6:59:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T18:59:03.951Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 08, 2021 6:59:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T18:59:03.974Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 08, 2021 6:59:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T18:59:04.312Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 08, 2021 6:59:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T18:59:04.378Z: Starting 5 workers in us-central1-c...
    Sep 08, 2021 6:59:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T18:59:27.905Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 08, 2021 6:59:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T18:59:54.430Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 08, 2021 7:00:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T19:00:20.630Z: Workers have started successfully.
    Sep 08, 2021 7:00:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T19:00:20.660Z: Workers have started successfully.
    Sep 08, 2021 7:00:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T19:00:53.302Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 08, 2021 7:00:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T19:00:53.423Z: Cleaning up.
    Sep 08, 2021 7:00:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T19:00:53.524Z: Stopping worker pool...
    Sep 08, 2021 7:03:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T19:03:09.283Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 08, 2021 7:03:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T19:03:09.321Z: Worker pool stopped.
    Sep 08, 2021 7:03:16 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-08_11_58_53-12780752956074070003 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a5d1b156-b9b7-4405-a7cb-07986c614330 and timestamp: 2021-09-08T19:03:16.708000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.564

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 08, 2021 7:03:19 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 6 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.05 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.068 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 5 mins 10.973 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 13m 50s
152 actionable tasks: 112 executed, 40 from cache

Publishing build scan...
https://gradle.com/s/i2xjblbztihtu

Stopped 5 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2400

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2400/display/redirect>

Changes:


------------------------------------------
[...truncated 349.13 KB...]
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@286566052]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 08, 2021 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 08, 2021 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 08, 2021 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 08, 2021 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 08, 2021 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 08, 2021 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 08, 2021 12:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 08, 2021 12:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 08, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 08, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-kyLIsdCfiZiuGlhZEOOvAXu4J-2BxQlikUE0jy6y3CM.jar
    Sep 08, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2753128918288298644.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ZwZnAmef4mg8h099EhDWRGx0TSv7U6bV1H6cboSZGoI.jar
    Sep 08, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 files newly uploaded in 0 seconds
    Sep 08, 2021 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 08, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103369 bytes, hash b6fa60a944a0830af17173de67c8f60d1b043fd0a91d3e66eb48f5b48f3dc7d2> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-tvpgqUSggwrxcXPeZ8j2DRsEP9CpHT5m60j1tI89x9I.pb
    Sep 08, 2021 12:45:25 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 08, 2021 12:45:25 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)

    Sep 08, 2021 12:45:25 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 08, 2021 12:45:25 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 08, 2021 12:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 08, 2021 12:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-08_05_45_25-3610861757899371753?project=apache-beam-testing
    Sep 08, 2021 12:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-08_05_45_25-3610861757899371753
    Sep 08, 2021 12:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-08_05_45_25-3610861757899371753
    Sep 08, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-08T12:45:29.368Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 08, 2021 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:45:36.258Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 08, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:45:37.177Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 08, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:45:37.209Z: Expanding GroupByKey operations into optimizable parts.
    Sep 08, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:45:37.232Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 08, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:45:37.294Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 08, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:45:37.326Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 08, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:45:37.343Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 08, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:45:37.667Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 08, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:45:37.741Z: Starting 5 workers in us-central1-c...
    Sep 08, 2021 12:46:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:46:08.261Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 08, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:46:22.755Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 08, 2021 12:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:46:49.150Z: Workers have started successfully.
    Sep 08, 2021 12:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:46:49.178Z: Workers have started successfully.
    Sep 08, 2021 12:47:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:47:20.678Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 08, 2021 12:47:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:47:20.813Z: Cleaning up.
    Sep 08, 2021 12:47:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:47:20.893Z: Stopping worker pool...
    Sep 08, 2021 12:49:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:49:43.577Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 08, 2021 12:49:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T12:49:43.639Z: Worker pool stopped.
    Sep 08, 2021 12:49:51 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-08_05_45_25-3610861757899371753 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8618766b-d55a-4471-b02a-2c84ef79a3dc and timestamp: 2021-09-08T12:49:51.180000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.474

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 08, 2021 12:49:52 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 44.747 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/6rvgr2bxmnsu4

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2399

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2399/display/redirect>

Changes:


------------------------------------------
[...truncated 346.62 KB...]
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@70318993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 08, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 08, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 08, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 08, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 08, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 08, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 08, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 08, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 08, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 08, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 08, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-kyLIsdCfiZiuGlhZEOOvAXu4J-2BxQlikUE0jy6y3CM.jar
    Sep 08, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6782132080445812658.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-3QbjJJHqMxEriTCAXV-88kluvgS1-_SZ-WhihzXaVAg.jar
    Sep 08, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 files newly uploaded in 0 seconds
    Sep 08, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 08, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103368 bytes, hash 727c0de372d67e79965752641d5bdf47064c52ac4ff02dba3dfbaef8fb6eacda> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-cnwN43LWfnmWV1JkHVvfRwZMUqxP8C26Pfuu-PturNo.pb
    Sep 08, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 08, 2021 6:45:13 AM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)
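
The SEVERE block above is gRPC's orphaned-channel detector: a ManagedChannel created for the BigQueryWriteClient was garbage-collected without an orderly shutdown. In the test itself the right fix is closing that client (GAPIC clients such as BigQueryWriteClient are AutoCloseable); the generic remedy the message names looks like this as a standalone sketch (target copied from the log):

    import java.util.concurrent.TimeUnit;
    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    ManagedChannel channel =
        ManagedChannelBuilder.forTarget("bigquerystorage.googleapis.com:443").build();
    try {
      // ... issue RPCs over the channel ...
    } finally {
      channel.shutdown();  // stop accepting new calls, let in-flight ones drain
      try {
        if (!channel.awaitTermination(30, TimeUnit.SECONDS)) {
          channel.shutdownNow();  // hard-stop anything still running
        }
      } catch (InterruptedException e) {
        channel.shutdownNow();
        Thread.currentThread().interrupt();
      }
    }
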

    Sep 08, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 08, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 08, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 08, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-07_23_45_13-1903189651028351943?project=apache-beam-testing
    Sep 08, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-07_23_45_13-1903189651028351943
    Sep 08, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-07_23_45_13-1903189651028351943
    Sep 08, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-08T06:45:16.995Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
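
This warning is expected for these perf runs: with autoscaling disabled the pool size is fixed, so a configured maximum has nothing to bound. The option combination that produces it looks roughly like the following (flag names are standard Dataflow pipeline options; beyond the 5 workers, the exact values this job passed are not visible in the log):

    --runner=DataflowRunner
    --autoscalingAlgorithm=NONE   # fixed-size worker pool
    --numWorkers=5                # the five workers started below
    --maxNumWorkers=5             # only consulted when autoscaling is enabled
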
    Sep 08, 2021 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:45:24.106Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 08, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:45:25.167Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 08, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:45:25.201Z: Expanding GroupByKey operations into optimizable parts.
    Sep 08, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:45:25.227Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 08, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:45:25.289Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 08, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:45:25.317Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 08, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:45:25.342Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 08, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:45:25.691Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 08, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:45:25.752Z: Starting 5 workers in us-central1-c...
    Sep 08, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:45:28.187Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 08, 2021 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:46:17.089Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 08, 2021 6:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:46:42.128Z: Workers have started successfully.
    Sep 08, 2021 6:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:46:42.154Z: Workers have started successfully.
    Sep 08, 2021 6:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:47:11.586Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 08, 2021 6:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:47:11.768Z: Cleaning up.
    Sep 08, 2021 6:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:47:11.838Z: Stopping worker pool...
    Sep 08, 2021 6:49:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:49:28.338Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 08, 2021 6:49:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T06:49:28.379Z: Worker pool stopped.
    Sep 08, 2021 6:49:37 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-07_23_45_13-1903189651028351943 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0a72177d-94e9-4644-980e-e86303c8f568 and timestamp: 2021-09-08T06:49:38.041000000Z:
                     Metric:                    Value:
                   read_time                     8.003
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 08, 2021 6:49:38 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
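
The consequence of this warning: the read_time/fields_read values above were computed but never written to the metrics store, because the publisher had no measurement or database configured. In Beam's testutils these normally arrive as test pipeline options along the lines of the following (option names as used by Beam's InfluxDB settings; treat the exact spellings and values as assumptions, since the job's actual invocation is not shown):

    --influxDatabase=beam_test_metrics
    --influxMeasurement=sql_bqio_read_java_batch
    --influxHost=http://localhost:8086
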

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.042 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 44.315 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 18s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/udz3vbqr3rp7c

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2398

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2398/display/redirect?page=changes>

Changes:

[Valentyn Tymofieiev] Disable Kafka perf tests.

[Andrew Pilloud] [BEAM-12850] Calcite drops empty Calc now

[Andrew Pilloud] [BEAM-12853] VALUES produces a UNION, window can't be set afterwards

[Andrew Pilloud] [BEAM-12852] Revert BigTable changes, just cast to bigint

[Andrew Pilloud] [BEAM-12851] Map output table names

[Luke Cwik] [BEAM-12802] Define a prefetchable iterator and iterable and utility


------------------------------------------
[...truncated 355.76 KB...]
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@286566052]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
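
The exception above spells out its own remedy: a PCollection of Beam Rows cannot have a coder inferred, so a schema must be attached explicitly, which gives the collection a RowCoder before finishSpecifying() runs. A minimal sketch, assuming a PCollection<Row> named rows and an illustrative two-field schema (both hypothetical; the IT's real schema is the HACKER_NEWS table's):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    Schema schema = Schema.builder()
        .addStringField("type")
        .addInt64Field("score")
        .build();

    // setRowSchema attaches the schema and, with it, a RowCoder -- exactly what
    // PCollection.getCoder() fails to find in the stack trace above.
    PCollection<Row> withSchema = rows.setRowSchema(schema);
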

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 08, 2021 12:47:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 08, 2021 12:47:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 08, 2021 12:47:49 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 08, 2021 12:47:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 08, 2021 12:47:49 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 08, 2021 12:47:49 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 08, 2021 12:47:50 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 08, 2021 12:47:50 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 08, 2021 12:47:50 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 08, 2021 12:47:55 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 08, 2021 12:47:55 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-kyLIsdCfiZiuGlhZEOOvAXu4J-2BxQlikUE0jy6y3CM.jar
    Sep 08, 2021 12:47:55 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-tests-bAwx5NkMSLPDxr3-g_qrdxYXMRzjFUW6mzgaBETsNRQ.jar
    Sep 08, 2021 12:47:55 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7694684679123257764.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-4kZRj0umaF3cn77hcmTm-b2-W2GGBGXXh5yslarxhuU.jar
    Sep 08, 2021 12:47:55 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-JWvpue78M5Rry8dtDE_bEvYq8OTAlYmyChunpeH4wGc.jar
    Sep 08, 2021 12:47:56 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 3 files newly uploaded in 1 seconds
    Sep 08, 2021 12:47:56 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 08, 2021 12:47:56 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103368 bytes, hash ea842bec9a6fbdfa2a693d1026a3bf9c1c0b75d144afe2fe74045b873344979b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-6oQr7JpvvfoqaT0QJqO_nBwLddFEr-L-dARbhzNEl5s.pb
    Sep 08, 2021 12:47:58 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 08, 2021 12:47:58 AM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)

    Sep 08, 2021 12:47:59 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 08, 2021 12:47:59 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 08, 2021 12:47:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 08, 2021 12:48:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-07_17_47_59-4372446111064550155?project=apache-beam-testing
    Sep 08, 2021 12:48:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-07_17_47_59-4372446111064550155
    Sep 08, 2021 12:48:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-07_17_47_59-4372446111064550155
    Sep 08, 2021 12:48:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-08T00:48:02.842Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 08, 2021 12:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:48:08.465Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 08, 2021 12:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:48:09.129Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 08, 2021 12:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:48:09.160Z: Expanding GroupByKey operations into optimizable parts.
    Sep 08, 2021 12:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:48:09.179Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 08, 2021 12:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:48:09.250Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 08, 2021 12:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:48:09.278Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 08, 2021 12:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:48:09.323Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 08, 2021 12:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:48:09.677Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 08, 2021 12:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:48:09.762Z: Starting 5 workers in us-central1-a...
    Sep 08, 2021 12:48:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:48:29.454Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 08, 2021 12:48:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:48:57.990Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 08, 2021 12:49:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:49:23.176Z: Workers have started successfully.
    Sep 08, 2021 12:49:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:49:23.202Z: Workers have started successfully.
    Sep 08, 2021 12:49:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:49:50.630Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 08, 2021 12:49:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:49:50.772Z: Cleaning up.
    Sep 08, 2021 12:49:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:49:50.867Z: Stopping worker pool...
    Sep 08, 2021 12:53:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:53:07.077Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 08, 2021 12:53:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-08T00:53:07.129Z: Worker pool stopped.
    Sep 08, 2021 12:53:12 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-07_17_47_59-4372446111064550155 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7573067b-04dd-4f90-9db7-0e5e466713b0 and timestamp: 2021-09-08T00:53:12.536000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.608

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 08, 2021 12:53:13 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.038 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 35.27 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 51s
152 actionable tasks: 103 executed, 49 from cache

Publishing build scan...
https://gradle.com/s/6224axtaqz6a6

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2397

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2397/display/redirect?page=changes>

Changes:

[heejong] [BEAM-12838] Update artifact local path for DataflowRunner Java

[kwu] [BEAM-12828] Convert UseTestStream tests to use Long instead of Integer

[kwu] Apply SpotlessJava

[kwu] Apply SpotlessJava

[aydar.zaynutdinov] [BEAM-3385] Add requires about `equals()` and `hashMethod()` to

[aydar.zaynutdinov] [BEAM-3385] Changes regarding spotlessApply task

[noreply] Update runners/flink/job-server/flink_job_server.gradle

[heejong] separate into resolveArtifacts method

[heejong] add test

[aydar.zaynutdinov] [BEAM-3385] wrap up equals() and hashCode() methods into links

[heejong] update

[heejong] fix formatting


------------------------------------------
[...truncated 350.90 KB...]
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@286566052]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 07, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 07, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 07, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 07, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 07, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 07, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 07, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 07, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 07, 2021 6:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 07, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 07, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 07, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test677474641367482438.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-PCNOItkRmdHNGWcFxbPh6_6KIYojTFkaL1iXMhDmgEM.jar
    Sep 07, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 files newly uploaded in 0 seconds
    Sep 07, 2021 6:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 07, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103368 bytes, hash 0452d91d9f41014011d638112db710717e22caf06b606dff5d4ed3dc84f5bd15> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-BFLZHZ9BAUAR1jgRLbcQcX4iyvBrYG3_XU7T3IT1vRU.pb
    Sep 07, 2021 6:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 07, 2021 6:45:29 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)

    Sep 07, 2021 6:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 07, 2021 6:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 07, 2021 6:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 07, 2021 6:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-07_11_45_29-4344153523175646952?project=apache-beam-testing
    Sep 07, 2021 6:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-07_11_45_29-4344153523175646952
    Sep 07, 2021 6:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-07_11_45_29-4344153523175646952
    Sep 07, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-07T18:45:36.955Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 07, 2021 6:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:45:51.312Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 07, 2021 6:45:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:45:52.366Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 07, 2021 6:45:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:45:52.483Z: Expanding GroupByKey operations into optimizable parts.
    Sep 07, 2021 6:45:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:45:52.598Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 07, 2021 6:45:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:45:52.705Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 07, 2021 6:45:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:45:52.740Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 07, 2021 6:45:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:45:52.771Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 07, 2021 6:45:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:45:53.759Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 07, 2021 6:45:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:45:53.829Z: Starting 5 workers in us-central1-c...
    Sep 07, 2021 6:46:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:46:18.527Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
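
Note: the remediation in the message above (deleting old / unused custom metric
descriptors) can be scripted. A hypothetical sketch, not part of this build,
using the Cloud Monitoring v3 Java client; the project name is taken from the
job URL above and credentials are assumed to come from the environment:

    import com.google.api.MetricDescriptor;
    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.ProjectName;

    public class CustomMetricDescriptorCleanup {
      public static void main(String[] args) throws Exception {
        ProjectName project = ProjectName.of("apache-beam-testing");
        try (MetricServiceClient client = MetricServiceClient.create()) {
          for (MetricDescriptor d : client.listMetricDescriptors(project).iterateAll()) {
            // Only custom.googleapis.com/* descriptors count against the
            // 100-descriptor situation the message warns about.
            if (d.getType().startsWith("custom.googleapis.com/")) {
              client.deleteMetricDescriptor(d.getName()); // irreversible: review before running
            }
          }
        }
      }
    }
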
    Sep 07, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:46:33.503Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 07, 2021 6:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:46:59.724Z: Workers have started successfully.
    Sep 07, 2021 6:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:46:59.751Z: Workers have started successfully.
    Sep 07, 2021 6:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:47:30.754Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 07, 2021 6:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:47:30.908Z: Cleaning up.
    Sep 07, 2021 6:47:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:47:31.293Z: Stopping worker pool...
    Sep 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:49:54.905Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T18:49:54.942Z: Worker pool stopped.
    Sep 07, 2021 6:50:03 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-07_11_45_29-4344153523175646952 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 99bbca5a-af68-4f92-b504-d6f7fe647007 and timestamp: 2021-09-07T18:50:03.195000000Z:
                     Metric:                    Value:
                   read_time                     7.883
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 07, 2021 6:50:04 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
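
Note: the warning above means the run's metrics were computed but never
published. A minimal sketch of wiring the missing measurement/database settings
through the Beam testutils builder; every value here is an assumed placeholder
(the real ones come from the perf-test job configuration, which this build is
evidently missing):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    class InfluxSettingsSketch {
      static InfluxDBSettings settings() {
        return InfluxDBSettings.builder()
            .withHost("http://localhost:8086")      // assumed host
            .withDatabase("beam_test_metrics")      // assumed database name
            .withMeasurement("sql_bqio_read_java")  // assumed measurement name
            .get();
      }
    }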

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.07 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.079 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 52.557 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 45s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/qhbdxo4wujdvc

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2396

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2396/display/redirect>

Changes:


------------------------------------------
[...truncated 347.05 KB...]
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@286566052]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
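
Note: the failure above is the generic missing-coder check in
PCollection.getCoder, and the message itself names the two fixes. A minimal
sketch of the second one, attaching a schema to a Row PCollection so it gets a
RowCoder; the schema here is an assumption based on the projected columns, not
the IT's actual code:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowCoderFix {
      // Assumed field types for the four projected columns.
      static final Schema SCHEMA =
          Schema.builder()
              .addStringField("author")
              .addStringField("type")
              .addStringField("title")
              .addInt64Field("score")
              .build();

      // setRowSchema attaches the schema (and therefore a coder) to the
      // collection; rows.setCoder(RowCoder.of(SCHEMA)) is the equivalent
      // spelling via the first listed root cause.
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        return rows.setRowSchema(SCHEMA);
      }
    }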

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 07, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 07, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 07, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 07, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 07, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 07, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 07, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 07, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
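
Note: the SQL, SQLPlan> and BEAMPlan> entries above are the Calcite planner at
work: the query is parsed into a logical plan, then converted to a Beam plan in
which the projection and filter are pushed into the BigQuery source (the
usedFields and BigQueryFilter attributes). A hypothetical sketch of that flow;
it assumes a HACKER_NEWS table has already been registered with the provider,
which the IT does in its setup:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.impl.BeamSqlEnv;
    import org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils;
    import org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTableProvider;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class PushDownQuerySketch {
      static PCollection<Row> apply(Pipeline pipeline) {
        BeamSqlEnv env = BeamSqlEnv.inMemory(new BigQueryTableProvider());
        String query =
            "SELECT `by` AS author, type, title, score "
                + "FROM HACKER_NEWS "
                + "WHERE (type = 'story' OR type = 'job') AND score > 2";
        // parseQuery yields the BeamRelNode printed as BEAMPlan> above;
        // toPCollection expands it into pipeline transforms.
        return BeamSqlRelUtils.toPCollection(pipeline, env.parseQuery(query));
      }
    }
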
    Sep 07, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 07, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 07, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112460 bytes, hash 93a1e175b9b79d5c5e6ba2a376703e60fbed0c2390f0bd0722a1e609f1866607> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-k6Hhdbm3nVxea6KjdnA-YPvtDCOQ8L0HIqHmCfGGZgc.pb
    Sep 07, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 07, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 07, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5219788681739946429.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-2nqE-IUzUHAXJkk_HoYMrpwFEMxoWZBym3zWkAA_pzc.jar
    Sep 07, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 files newly uploaded in 0 seconds
    Sep 07, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 07, 2021 12:45:10 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)
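
Note: the SEVERE message above is gRPC's orphaned-channel detector, and the
remedy it names is the standard shutdown sequence. A minimal hypothetical
sketch of that sequence on a directly created channel:

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;
    import java.util.concurrent.TimeUnit;

    class ChannelShutdownSketch {
      static void useAndClose() throws InterruptedException {
        ManagedChannel channel =
            ManagedChannelBuilder.forTarget("bigquerystorage.googleapis.com:443").build();
        try {
          // ... issue RPCs on the channel ...
        } finally {
          channel.shutdown(); // begin graceful shutdown, as the warning asks
          if (!channel.awaitTermination(10, TimeUnit.SECONDS)) {
            channel.shutdownNow(); // force-terminate anything still pending
          }
        }
      }
    }

In the trace itself the channel is owned by a BigQueryWriteClient created
during pipeline validation, so the equivalent at that level would be closing
the client, which implements AutoCloseable.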

    Sep 07, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 07, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 07, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 07, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-07_05_45_11-849641508808864443?project=apache-beam-testing
    Sep 07, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-07_05_45_11-849641508808864443
    Sep 07, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-07_05_45_11-849641508808864443
    Sep 07, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-07T12:45:14.723Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 07, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:45:19.108Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 07, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:45:19.926Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 07, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:45:20.002Z: Expanding GroupByKey operations into optimizable parts.
    Sep 07, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:45:20.030Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 07, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:45:20.089Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 07, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:45:20.137Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 07, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:45:20.160Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 07, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:45:20.442Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 07, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:45:20.533Z: Starting 5 workers in us-central1-a...
    Sep 07, 2021 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:45:49.174Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 07, 2021 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:46:04.225Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 07, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:46:30.316Z: Workers have started successfully.
    Sep 07, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:46:30.341Z: Workers have started successfully.
    Sep 07, 2021 12:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:46:58.071Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 07, 2021 12:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:46:58.270Z: Cleaning up.
    Sep 07, 2021 12:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:46:58.373Z: Stopping worker pool...
    Sep 07, 2021 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:49:30.825Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 07, 2021 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T12:49:30.867Z: Worker pool stopped.
    Sep 07, 2021 12:49:37 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-07_05_45_11-849641508808864443 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d6f18021-03aa-4931-a589-e995715e5182 and timestamp: 2021-09-07T12:49:37.908000000Z:
                     Metric:                    Value:
                   read_time                     8.327
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 07, 2021 12:49:38 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 45.23 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 18s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/kd4ebeilufxjk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2395

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2395/display/redirect>

Changes:


------------------------------------------
[...truncated 347.71 KB...]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@70318993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 07, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 07, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 07, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 07, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 07, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 07, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 07, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 07, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 07, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 07, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 07, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112460 bytes, hash 4bbcdd6a36c080ec0c85fe822d3dfe5865b704a77973fcac82c2e73bebd75e3b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-S7zdajbAgOwMhf6CLT3-WGW3BKd5c_ysgsLnO-vXXjs.pb
    Sep 07, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 07, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 07, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2888780114063650351.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-zl-Vnbm9g0Ghhf58H5zjyDulKiFYFCz0r4eF-40e_gM.jar
    Sep 07, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 files newly uploaded in 0 seconds
    Sep 07, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 07, 2021 6:45:11 AM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)

    Sep 07, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 07, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 07, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 07, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-06_23_45_11-12810481183632081909?project=apache-beam-testing
    Sep 07, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-06_23_45_11-12810481183632081909
    Sep 07, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-06_23_45_11-12810481183632081909
    Sep 07, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-07T06:45:15.050Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 07, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:45:20.086Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 07, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:45:20.829Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 07, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:45:20.868Z: Expanding GroupByKey operations into optimizable parts.
    Sep 07, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:45:20.896Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 07, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:45:20.960Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 07, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:45:20.986Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 07, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:45:21.031Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 07, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:45:21.335Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 07, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:45:21.396Z: Starting 5 workers in us-central1-a...
    Sep 07, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:45:42.037Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 07, 2021 6:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:45:59.908Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 07, 2021 6:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:45:59.939Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 07, 2021 6:46:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:46:10.320Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 07, 2021 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:46:32.048Z: Workers have started successfully.
    Sep 07, 2021 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:46:32.076Z: Workers have started successfully.
    Sep 07, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:46:59.907Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 07, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:47:00.036Z: Cleaning up.
    Sep 07, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:47:00.105Z: Stopping worker pool...
    Sep 07, 2021 6:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:49:25.974Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 07, 2021 6:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T06:49:26.009Z: Worker pool stopped.
    Sep 07, 2021 6:49:32 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-06_23_45_11-12810481183632081909 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cdeb74cc-82cf-4b62-805d-6846bfe5148b and timestamp: 2021-09-07T06:49:32.801000000Z:
                     Metric:                    Value:
                   read_time                     9.354
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 07, 2021 6:49:33 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 40.237 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/4ntintwdoevpg

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2394

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2394/display/redirect>

Changes:


------------------------------------------
[...truncated 347.62 KB...]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@286566052]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 07, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 07, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 07, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 07, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 07, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 07, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 07, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 07, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
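
For reference, the push-down reported above corresponds roughly to a direct-read BigQuery source with field selection and a row restriction. A hedged sketch of an equivalent hand-written read (the table reference is illustrative, not the test's actual wiring):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    // Projection and filter are handed to the BigQuery Storage Read API
    // instead of being applied downstream in the pipeline.
    TypedRead<TableRow> read =
        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")  // illustrative table
            .withMethod(TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
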
    Sep 07, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 07, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 07, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112461 bytes, hash a118467dd49ffcef4ecdea92d2796b3ae3476b4c3618ac6eb94942e75f7d8495> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-oRhGfdSf_O9OzeqS0nlrOuNHa0w2GKxuuUlC5199hJU.pb
    Sep 07, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 07, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6664303331593150101.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-8YoqFKtqp_alQIdoUI-5gFBc3xe7rODNpfG4KqeOblU.jar
    Sep 07, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 07, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 files newly uploaded in 0 seconds
    Sep 07, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 07, 2021 12:45:10 AM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)
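
The orderly teardown the warning asks for looks roughly like the sketch below. In this build the channel is allocated inside BigQueryWriteClient (see the trace above), so the leak sits in client/test cleanup rather than pipeline code; the snippet only illustrates the shutdown()/awaitTermination() contract on a channel you own:

    import java.util.concurrent.TimeUnit;
    import io.grpc.ManagedChannel;

    static void closeChannel(ManagedChannel channel) throws InterruptedException {
        channel.shutdown();  // begin graceful shutdown; in-flight RPCs may finish
        if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
            channel.shutdownNow();  // force-cancel anything still running
            channel.awaitTermination(5, TimeUnit.SECONDS);
        }
    }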

    Sep 07, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 07, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 07, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 07, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-06_17_45_11-10672721731060003522?project=apache-beam-testing
    Sep 07, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-06_17_45_11-10672721731060003522
    Sep 07, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-06_17_45_11-10672721731060003522
    Sep 07, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-07T00:45:14.804Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
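
The warning above comes from the test's fixed-size worker pool: with autoscaling disabled, Dataflow sizes the pool from numWorkers and ignores maxNumWorkers. A hedged sketch of the corresponding pipeline options (values mirror the log; assumed, not confirmed, to be how this job was configured):

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineWorkerPoolOptions.AutoscalingAlgorithmType;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    DataflowPipelineOptions options =
        PipelineOptionsFactory.create().as(DataflowPipelineOptions.class);
    options.setAutoscalingAlgorithm(AutoscalingAlgorithmType.NONE);  // fixed pool
    options.setNumWorkers(5);     // the size actually used
    options.setMaxNumWorkers(5);  // ignored when autoscaling is NONE, hence the warning
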
    Sep 07, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:45:21.127Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 07, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:45:21.837Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 07, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:45:21.876Z: Expanding GroupByKey operations into optimizable parts.
    Sep 07, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:45:21.902Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 07, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:45:21.988Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 07, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:45:22.006Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 07, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:45:22.042Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 07, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:45:22.385Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 07, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:45:22.464Z: Starting 5 workers in us-central1-c...
    Sep 07, 2021 12:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:45:52.728Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Sep 07, 2021 12:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:45:52.749Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Sep 07, 2021 12:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:45:55.308Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 07, 2021 12:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:46:03.038Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 07, 2021 12:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:46:28.447Z: Workers have started successfully.
    Sep 07, 2021 12:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:46:28.486Z: Workers have started successfully.
    Sep 07, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:47:00.630Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 07, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:47:00.780Z: Cleaning up.
    Sep 07, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:47:00.860Z: Stopping worker pool...
    Sep 07, 2021 12:49:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:49:26.894Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 07, 2021 12:49:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-07T00:49:26.935Z: Worker pool stopped.
    Sep 07, 2021 12:49:35 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-06_17_45_11-10672721731060003522 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 449011c2-af89-4435-8473-3c2f4e5f462b and timestamp: 2021-09-07T00:49:35.071000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.047

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 07, 2021 12:49:35 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 42.078 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 15s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/sag6kpmy7pk6i

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2393

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2393/display/redirect>

Changes:


------------------------------------------
[...truncated 346.90 KB...]
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@70318993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 06, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 06, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 06, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 06, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 06, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 06, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 06, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 06, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 06, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 06, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 06, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112460 bytes, hash 711a43c9ae60ad1d7196fc713366ff4774f6028910ae161aa48fcf1167bf99a1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-cRpDya5grR1xlvxxM2b_R3T2AokQrhYapI_PEWe_maE.pb
    Sep 06, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 06, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2113710927747881169.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-U7pVFP8_PvmAk8Chiic-7gdat8RRbpW7hHQpqwqcSiQ.jar
    Sep 06, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 06, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 files newly uploaded in 0 seconds
    Sep 06, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 06, 2021 6:45:11 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)

    Sep 06, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 06, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 06, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 06, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-06_11_45_12-233892037897958227?project=apache-beam-testing
    Sep 06, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-06_11_45_12-233892037897958227
    Sep 06, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-06_11_45_12-233892037897958227
    Sep 06, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-06T18:45:15.591Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 06, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:45:22.115Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 06, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:45:22.793Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 06, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:45:22.831Z: Expanding GroupByKey operations into optimizable parts.
    Sep 06, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:45:22.881Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 06, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:45:22.997Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 06, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:45:23.026Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 06, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:45:23.080Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 06, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:45:23.400Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 06, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:45:23.490Z: Starting 5 workers in us-central1-c...
    Sep 06, 2021 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:45:31.785Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 06, 2021 6:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:46:07.920Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 06, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:46:33.322Z: Workers have started successfully.
    Sep 06, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:46:33.345Z: Workers have started successfully.
    Sep 06, 2021 6:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:47:03.215Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 06, 2021 6:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:47:03.373Z: Cleaning up.
    Sep 06, 2021 6:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:47:03.445Z: Stopping worker pool...
    Sep 06, 2021 6:49:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:49:19.044Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 06, 2021 6:49:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T18:49:19.190Z: Worker pool stopped.
    Sep 06, 2021 6:49:28 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-06_11_45_12-233892037897958227 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 56182a06-0f10-4fde-83bd-92d0d27c6801 and timestamp: 2021-09-06T18:49:28.073000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.974

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 06, 2021 6:49:28 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 35.21 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 8s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/tmtihfpabalhg

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2392

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2392/display/redirect>

Changes:


------------------------------------------
[...truncated 350.93 KB...]
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 06, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 06, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 06, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 06, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 06, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 06, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 06, 2021 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 06, 2021 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 06, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 06, 2021 12:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 06, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112459 bytes, hash bda40eb7cf50cf89268623dcf58f7cb2d3c65a21c8c558b162a31126adb7059a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-vaQOt89Qz4kmhiPc9Y98stPGWiHIxVixYqMRJq23BZo.pb
    Sep 06, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 06, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/6.8.3/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-flCHuvnqyUfqN7HJoR2_KDHap0ZeyuyWltYseI4uf_w.jar
    Sep 06, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 06, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5392601417418364659.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-_Kg7mctPBChh6eDCtHCN3NraU4acMGY59opu0ABAx0M.jar
    Sep 06, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.m2/repository/org/apache/beam/beam-vendor-grpc-1_36_0/0.2/beam-vendor-grpc-1_36_0-0.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-grpc-1_36_0-0.2-ARJ9IUTlyXpbZnCRXyAjusfKQ3IZ_alLBvpCeiWl3YQ.jar
    Sep 06, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 3 files newly uploaded in 1 seconds
    Sep 06, 2021 12:45:23 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 06, 2021 12:45:23 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)

    Sep 06, 2021 12:45:23 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 06, 2021 12:45:23 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 06, 2021 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 06, 2021 12:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-06_05_45_23-1807458484984207314?project=apache-beam-testing
    Sep 06, 2021 12:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-06_05_45_23-1807458484984207314
    Sep 06, 2021 12:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-06_05_45_23-1807458484984207314
    Sep 06, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-06T12:45:27.027Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 06, 2021 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:45:35.958Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 06, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:45:36.834Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 06, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:45:36.862Z: Expanding GroupByKey operations into optimizable parts.
    Sep 06, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:45:36.876Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 06, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:45:36.941Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 06, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:45:36.970Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 06, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:45:36.998Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 06, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:45:37.385Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 06, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:45:37.448Z: Starting 5 workers in us-central1-c...
    Sep 06, 2021 12:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:46:03.132Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 06, 2021 12:46:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:46:17.613Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 06, 2021 12:46:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:46:17.713Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 06, 2021 12:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:46:28.143Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 06, 2021 12:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:46:52.650Z: Workers have started successfully.
    Sep 06, 2021 12:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:46:52.681Z: Workers have started successfully.
    Sep 06, 2021 12:47:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:47:23.226Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 06, 2021 12:47:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:47:23.357Z: Cleaning up.
    Sep 06, 2021 12:47:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:47:23.427Z: Stopping worker pool...
    Sep 06, 2021 12:49:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:49:43.709Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 06, 2021 12:49:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T12:49:43.763Z: Worker pool stopped.
    Sep 06, 2021 12:49:49 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-06_05_45_23-1807458484984207314 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f1795ea7-0c86-46e9-bc81-eddab3de7b75 and timestamp: 2021-09-06T12:49:49.613000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.098

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 06, 2021 12:49:50 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 45.99 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 32s
152 actionable tasks: 99 executed, 53 from cache

Publishing build scan...
https://gradle.com/s/bzozrrtvrwltq

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2391

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2391/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #14927 from [BEAM-12400] MongoDBIO support for update


------------------------------------------
[...truncated 351.07 KB...]
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@286566052]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
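
The readUsingDefaultMethod failure above is Beam refusing to guess a coder for a PCollection of Rows: a Row only gets a coder once a schema is attached. A minimal sketch of the fix the error message itself suggests, assuming a ParDo that emits Rows with the four fields the query uses (field names, types, and values here are illustrative, not taken from the test):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Schema mirroring the four fields the query reads: by, type, title, score.
        Schema schema =
            Schema.builder()
                .addStringField("by")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

        PCollection<Row> rows =
            p.apply(Create.of("story", "job"))
                .apply(
                    ParDo.of(
                        new DoFn<String, Row>() {
                          @ProcessElement
                          public void process(@Element String type, OutputReceiver<Row> out) {
                            out.output(
                                Row.withSchema(schema)
                                    .addValues("alice", type, "hello", 3L)
                                    .build());
                          }
                        }))
                // A ParDo emitting Row has no inferable coder; attaching the schema
                // here is exactly the fix the IllegalStateException asks for.
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }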

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 06, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 06, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 06, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 06, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 06, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 06, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 06, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 06, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 06, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 06, 2021 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 06, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112460 bytes, hash cccdde8303397054e696647b9b4abdf7f20fa5bb4811f6f546012c086472b75c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-zM3egwM5cFTmlmR7m0q99_IPpbtIEfb1RgEsCGRyt1w.pb
    Sep 06, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 06, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 06, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3458559268242123816.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-kueNokXYV5Y0IYBOcmqyfoPgFMmWGBezg9eOeTjXJYw.jar
    Sep 06, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT-h5Vm59YQQa-RcmHCCG09Iw6skDekxZqFrPsGtnvUHB4.jar
    Sep 06, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT-tests-Lg8Zg7us2s6VPtPB7f5WrJKFl5dhDd82wMfxmGkALkE.jar
    Sep 06, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 3 files newly uploaded in 0 seconds
    Sep 06, 2021 6:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 06, 2021 6:45:32 AM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)
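
The SEVERE above is gRPC's orphaned-channel detector: a BigQuery Storage write channel was garbage-collected without being shut down. In this trace the channel is created inside the generated BigQueryWriteClient, so the cleanup belongs in the SDK's client lifecycle rather than in the test itself; the pattern the warning asks for, as a standalone sketch (target and timeout are illustrative):

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;
    import java.util.concurrent.TimeUnit;

    public class ChannelShutdownSketch {
      public static void main(String[] args) throws InterruptedException {
        ManagedChannel channel =
            ManagedChannelBuilder.forTarget("bigquerystorage.googleapis.com:443")
                .useTransportSecurity()
                .build();
        try {
          // ... issue RPCs on the channel ...
        } finally {
          // Orderly shutdown: stop accepting new calls, wait briefly for
          // in-flight calls, then force-close anything still open.
          channel.shutdown();
          if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
            channel.shutdownNow();
          }
        }
      }
    }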

    Sep 06, 2021 6:45:33 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 06, 2021 6:45:33 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 06, 2021 6:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 06, 2021 6:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-05_23_45_33-10006167550902779687?project=apache-beam-testing
    Sep 06, 2021 6:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-05_23_45_33-10006167550902779687
    Sep 06, 2021 6:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-05_23_45_33-10006167550902779687
    Sep 06, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-06T06:45:36.879Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 06, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:45:41.637Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 06, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:45:42.407Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 06, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:45:42.467Z: Expanding GroupByKey operations into optimizable parts.
    Sep 06, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:45:42.483Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 06, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:45:42.546Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 06, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:45:42.573Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:45:42.606Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:45:42.918Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:45:42.985Z: Starting 5 workers in us-central1-a...
    Sep 06, 2021 6:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:46:04.687Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 06, 2021 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:46:31.967Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 06, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:46:58.287Z: Workers have started successfully.
    Sep 06, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:46:58.314Z: Workers have started successfully.
    Sep 06, 2021 6:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:47:26.479Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 06, 2021 6:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:47:26.612Z: Cleaning up.
    Sep 06, 2021 6:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:47:26.686Z: Stopping worker pool...
    Sep 06, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:49:46.103Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 06, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T06:49:46.136Z: Worker pool stopped.
    Sep 06, 2021 6:49:51 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-05_23_45_33-10006167550902779687 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f4a1676b-cdc2-4474-89b1-c1c512a43618 and timestamp: 2021-09-06T06:49:51.495000000Z:
                     Metric:                    Value:
                   read_time                     9.398
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 06, 2021 6:49:52 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
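
The warning above means the harness had no InfluxDB measurement/database configured, so the read_time and fields_read metrics were printed but not persisted. A sketch of what a configured publisher setup might look like, assuming Beam's InfluxDBSettings test utility; the host, database, and measurement values (and option names like --influxDatabase / --influxMeasurement) are assumptions, not taken from this job:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    class InfluxConfigSketch {
      // Hypothetical values; a real job would supply these via pipeline options.
      static InfluxDBSettings settings() {
        return InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_test_metrics")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
      }
    }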

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 36.84 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 32s
152 actionable tasks: 99 executed, 53 from cache

Publishing build scan...
https://gradle.com/s/5i7kkok6orbgk

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2390

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2390/display/redirect>

Changes:


------------------------------------------
[...truncated 346.87 KB...]
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@70318993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 06, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 06, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 06, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 06, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 06, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 06, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 06, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 06, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 06, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 06, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 06, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112459 bytes, hash 68dae5563e39d962cf202919e180e150903c6c950bae892ca0a5dfdcdca8c9cb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-aNrlVj452WLPICkZ4YDhUJA8bJULroksoKXf3Nyoycs.pb
    Sep 06, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 06, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 06, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2081981948379655649.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-wC8ji88uIqqXvXRR1z84kSHT2YTMTVoAP72AT_wBUT8.jar
    Sep 06, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 files newly uploaded in 0 seconds
    Sep 06, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 06, 2021 12:45:10 AM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)

    Sep 06, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 06, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 06, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 06, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-05_17_45_10-15242399150055407121?project=apache-beam-testing
    Sep 06, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-05_17_45_10-15242399150055407121
    Sep 06, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-05_17_45_10-15242399150055407121
    Sep 06, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-06T00:45:14.697Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:45:19.040Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:45:19.727Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:45:19.765Z: Expanding GroupByKey operations into optimizable parts.
    Sep 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:45:19.785Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:45:19.841Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:45:19.866Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:45:19.901Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:45:20.182Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:45:20.259Z: Starting 5 workers in us-central1-a...
    Sep 06, 2021 12:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:45:51.457Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 06, 2021 12:46:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:46:05.234Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 06, 2021 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:46:31.608Z: Workers have started successfully.
    Sep 06, 2021 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:46:31.656Z: Workers have started successfully.
    Sep 06, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:47:00.599Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 06, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:47:00.754Z: Cleaning up.
    Sep 06, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:47:00.823Z: Stopping worker pool...
    Sep 06, 2021 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:49:26.150Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 06, 2021 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-06T00:49:26.197Z: Worker pool stopped.
    Sep 06, 2021 12:49:32 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-05_17_45_10-15242399150055407121 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4ffeddae-4881-41f2-930f-132e1b8a487c and timestamp: 2021-09-06T00:49:32.837000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.433

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 06, 2021 12:49:33 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 40.293 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/lldkllqzxtpes

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2389

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2389/display/redirect>

Changes:


------------------------------------------
[...truncated 346.72 KB...]
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@70318993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
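
For context on the failure above: the pipeline contains a PCollection of Beam Row whose coder cannot be inferred, and the exception itself names the fix, PCollection.setRowSchema (or an explicit .setCoder()). Below is a minimal sketch of that pattern, assuming a hypothetical four-column schema mirroring what the test query selects; the field names, values, and DoFn are illustrative, not the IT's actual code.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      // Hypothetical schema mirroring the four columns the test query selects.
      private static final Schema SCHEMA =
          Schema.builder()
              .addStringField("author")
              .addStringField("type")
              .addStringField("title")
              .addInt64Field("score")
              .build();

      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());

        PCollection<Row> rows =
            p.apply(Create.of("alice"))
                .apply(
                    ParDo.of(
                        new DoFn<String, Row>() {
                          @ProcessElement
                          public void process(@Element String author, OutputReceiver<Row> out) {
                            out.output(
                                Row.withSchema(SCHEMA)
                                    .addValues(author, "story", "hello", 42L)
                                    .build());
                          }
                        }))
                // Without this call, coder inference fails exactly as in the
                // stack trace above: no default Coder exists for Row.
                .setRowSchema(SCHEMA);

        p.run().waitUntilFinish();
      }
    }

Calling setRowSchema immediately after the ParDo attaches a SchemaCoder before the output PCollection is finalized, which is the point at which the IllegalStateException above is otherwise thrown.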

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 05, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 05, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 05, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 05, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 05, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 05, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 05, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 05, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
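
To make the plans above concrete: the LogicalProject/LogicalFilter pair is folded into a single BeamPushDownIOSourceRel, so the projection (usedFields) and the supported filter run inside BigQuery's Storage Read API instead of in the Beam job. Below is a minimal sketch of the same query shape via SqlTransform, run against an in-memory PCOLLECTION table; the rows are hypothetical, and against a BigQuery table configured for DIRECT_READ the planner would instead produce the pushed-down plan shown in the log.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class FilterShapeSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());

        Schema schema =
            Schema.builder()
                .addStringField("by")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

        PCollection<Row> items =
            p.apply(
                Create.of(
                        Row.withSchema(schema).addValues("alice", "story", "hi", 5L).build(),
                        Row.withSchema(schema).addValues("bob", "comment", "re: hi", 9L).build())
                    .withRowSchema(schema));

        // Same projection and filter shape as the plans above. Against this
        // in-memory table there is nothing to push down; with a DIRECT_READ
        // BigQuery table, BigQuery itself applies both.
        PCollection<Row> result =
            items.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }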
    Sep 05, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 05, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 05, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112460 bytes, hash 331ee535d19f3c12e28d954487172e57358e077f111a4a9885e3d77025e8c324> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Mx7lNdGfPBLijZVEhxcuVzWOB38RGkqYhePXcCXowyQ.pb
    Sep 05, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 05, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 05, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7536243845373742245.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-wp0Vg1lfBJKx5Q0-fvd7Ubyd7BRjVWIKP5LRprCiovs.jar
    Sep 05, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 file newly uploaded in 0 seconds
    Sep 05, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 05, 2021 6:45:08 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)
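
On the SEVERE message above: gRPC's orphan detector flags a ManagedChannel that was garbage-collected without an orderly shutdown. Here the channel is created internally by the BigQuery Storage client inside BigQueryServicesImpl, so the real remedy is closing that client; the sketch below only illustrates the lifecycle contract the message asks for. The target string is taken from the log, everything else is illustrative.

    import java.util.concurrent.TimeUnit;
    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    public class ChannelShutdownSketch {
      public static void main(String[] args) throws InterruptedException {
        // Target copied from the log; in the IT this channel is owned by the
        // BigQuery client, not constructed directly by test code.
        ManagedChannel channel =
            ManagedChannelBuilder.forTarget("bigquerystorage.googleapis.com:443")
                .useTransportSecurity()
                .build();
        try {
          // ... issue RPCs through stubs bound to this channel ...
        } finally {
          // What the orphan-detection message asks for: shut down, then wait,
          // escalating to shutdownNow() if termination does not complete.
          channel.shutdown();
          if (!channel.awaitTermination(10, TimeUnit.SECONDS)) {
            channel.shutdownNow();
          }
        }
      }
    }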

    Sep 05, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 05, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 05, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 05, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-05_11_45_08-10811630702578601354?project=apache-beam-testing
    Sep 05, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-05_11_45_08-10811630702578601354
    Sep 05, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-05_11_45_08-10811630702578601354
    Sep 05, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-05T18:45:12.227Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:45:16.758Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:45:17.485Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:45:17.525Z: Expanding GroupByKey operations into optimizable parts.
    Sep 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:45:17.552Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:45:17.650Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:45:17.677Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:45:17.709Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:45:18.010Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:45:18.076Z: Starting 5 workers in us-central1-a...
    Sep 05, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:45:43.973Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 05, 2021 6:46:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:46:09.696Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 05, 2021 6:46:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:46:36.854Z: Workers have started successfully.
    Sep 05, 2021 6:46:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:46:36.874Z: Workers have started successfully.
    Sep 05, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:47:04.356Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 05, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:47:04.465Z: Cleaning up.
    Sep 05, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:47:04.521Z: Stopping worker pool...
    Sep 05, 2021 6:49:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:49:27.479Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 05, 2021 6:49:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T18:49:27.534Z: Worker pool stopped.
    Sep 05, 2021 6:49:32 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-05_11_45_08-10811630702578601354 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d514a0ce-bcf1-47f8-bca6-cb07511bfb21 and timestamp: 2021-09-05T18:49:32.843000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.541

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 05, 2021 6:49:33 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 42.809 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 15s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/xw5simwt52532

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2388

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2388/display/redirect>

Changes:


------------------------------------------
[...truncated 376.39 KB...]


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@70318993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 05, 2021 1:34:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 05, 2021 1:34:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 05, 2021 1:34:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 05, 2021 1:34:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 05, 2021 1:34:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 05, 2021 1:34:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 05, 2021 1:34:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 05, 2021 1:34:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 05, 2021 1:34:49 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 05, 2021 1:34:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 05, 2021 1:34:53 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112459 bytes, hash 78c8b2edbe2b55892955aed83347386df3a05337af4d112934234ca29a48bc21> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-eMiy7b4rVYkpVa7YM0c4bfOgUzevTREpNCNMoppIvCE.pb
    Sep 05, 2021 1:34:55 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 05, 2021 1:34:55 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 05, 2021 1:34:56 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1279157143094325105.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-1nPVfEoA5Bk7WAyYcYKuM7ASyQO-BtC7nlMtexpvXgI.jar
    Sep 05, 2021 1:34:57 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.m2/repository/org/apache/beam/beam-vendor-grpc-1_36_0/0.2/beam-vendor-grpc-1_36_0-0.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-grpc-1_36_0-0.2-nVxp3_1Xbd1ktCTMOtQxL1xQUUS_NwN-RSOhZ5xBwdg.jar
    Sep 05, 2021 1:34:58 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 246 files cached, 2 files newly uploaded in 2 seconds
    Sep 05, 2021 1:34:58 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 05, 2021 1:34:58 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)

    Sep 05, 2021 1:34:58 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 05, 2021 1:34:58 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 05, 2021 1:34:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 05, 2021 1:34:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-05_06_34_58-11208989136571102888?project=apache-beam-testing
    Sep 05, 2021 1:34:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-05_06_34_58-11208989136571102888
    Sep 05, 2021 1:34:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-05_06_34_58-11208989136571102888
    Sep 05, 2021 1:35:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-05T13:35:02.402Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 05, 2021 1:35:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:35:06.768Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 05, 2021 1:35:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:35:07.301Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 05, 2021 1:35:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:35:07.330Z: Expanding GroupByKey operations into optimizable parts.
    Sep 05, 2021 1:35:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:35:07.362Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 05, 2021 1:35:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:35:07.438Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 05, 2021 1:35:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:35:07.473Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 05, 2021 1:35:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:35:07.507Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 05, 2021 1:35:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:35:07.806Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 05, 2021 1:35:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:35:07.885Z: Starting 5 workers in us-central1-a...
    Sep 05, 2021 1:35:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:35:17.092Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 05, 2021 1:35:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:35:54.082Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 05, 2021 1:36:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:36:22.443Z: Workers have started successfully.
    Sep 05, 2021 1:36:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:36:22.466Z: Workers have started successfully.
    Sep 05, 2021 1:36:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:36:46.374Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 05, 2021 1:36:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:36:46.516Z: Cleaning up.
    Sep 05, 2021 1:36:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:36:46.592Z: Stopping worker pool...
    Sep 05, 2021 1:39:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:39:07.734Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 05, 2021 1:39:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T13:39:07.783Z: Worker pool stopped.
    Sep 05, 2021 1:39:14 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-05_06_34_58-11208989136571102888 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2d97a785-7a81-4759-832a-a760c358f614 and timestamp: 2021-09-05T13:39:14.847000000Z:
                     Metric:                    Value:
                   read_time                     6.515
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 05, 2021 1:39:15 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 39.79 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 16s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/uzswpvyslljcq

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2387

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2387/display/redirect>

Changes:


------------------------------------------
[...truncated 357.68 KB...]
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@72019671]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 05, 2021 6:49:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 05, 2021 6:49:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 05, 2021 6:49:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 05, 2021 6:49:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 05, 2021 6:49:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 05, 2021 6:49:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 05, 2021 6:49:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 05, 2021 6:49:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 05, 2021 6:49:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 05, 2021 6:49:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 05, 2021 6:49:36 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112458 bytes, hash ba890ab37a3bb1648c134de1cc6338922f829634d8682dfd223beb65fdbc6080> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-uokKs3o7sWSME03hzGM4ki-CljTYaC39IjvrZf28YIA.pb
    Sep 05, 2021 6:49:44 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 05, 2021 6:49:45 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 05, 2021 6:49:47 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1397531034086354658.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-DJkiL0IMl6536bjJglv_tQMKLx3lMnKSJZRy7qcwStU.jar
    Sep 05, 2021 6:49:50 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 file newly uploaded in 5 seconds
    Sep 05, 2021 6:49:50 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 05, 2021 6:49:50 AM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)
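
For reference, here is a minimal Java sketch of the shutdown discipline this warning asks for. The target address is copied from the message above; the class name, timeout values, and try/finally framing are illustrative assumptions, not code from the Beam SDK.

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;
    import java.util.concurrent.TimeUnit;

    public class ChannelShutdownSketch {
      public static void main(String[] args) throws InterruptedException {
        ManagedChannel channel =
            ManagedChannelBuilder.forTarget("bigquerystorage.googleapis.com:443")
                .useTransportSecurity()
                .build();
        try {
          // ... issue RPCs over the channel here ...
        } finally {
          // What the orphan-wrapper warning asks for: request shutdown,
          // then block until termination is actually observed.
          channel.shutdown();
          if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
            channel.shutdownNow(); // force-close any lingering calls
            channel.awaitTermination(5, TimeUnit.SECONDS);
          }
        }
      }
    }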

    Sep 05, 2021 6:49:52 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 05, 2021 6:49:52 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 05, 2021 6:49:53 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 05, 2021 6:49:53 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-04_23_49_53-10880493921920342660?project=apache-beam-testing
    Sep 05, 2021 6:49:53 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-04_23_49_53-10880493921920342660
    Sep 05, 2021 6:49:53 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-04_23_49_53-10880493921920342660
    Sep 05, 2021 6:49:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-05T06:49:56.711Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 05, 2021 6:50:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:50:02.935Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 05, 2021 6:50:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:50:03.511Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 05, 2021 6:50:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:50:03.555Z: Expanding GroupByKey operations into optimizable parts.
    Sep 05, 2021 6:50:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:50:03.587Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 05, 2021 6:50:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:50:03.659Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 05, 2021 6:50:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:50:03.692Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 05, 2021 6:50:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:50:03.726Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 05, 2021 6:50:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:50:04.142Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 05, 2021 6:50:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:50:04.222Z: Starting 5 workers in us-central1-c...
    Sep 05, 2021 6:50:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:50:32.292Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 05, 2021 6:50:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:50:49.604Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 05, 2021 6:51:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:51:14.650Z: Workers have started successfully.
    Sep 05, 2021 6:51:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:51:14.684Z: Workers have started successfully.
    Sep 05, 2021 6:51:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:51:44.035Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 05, 2021 6:51:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:51:44.198Z: Cleaning up.
    Sep 05, 2021 6:51:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:51:44.266Z: Stopping worker pool...
    Sep 05, 2021 6:54:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:54:05.921Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 05, 2021 6:54:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T06:54:05.970Z: Worker pool stopped.
    Sep 05, 2021 6:54:11 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-04_23_49_53-10880493921920342660 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 15539c25-47ac-431c-85cf-9f78ac6c2158 and timestamp: 2021-09-05T06:54:11.921000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.324

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 05, 2021 6:54:12 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.094 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.09 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 5 mins 48.465 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 42s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/psv7h3nv4lyak

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2386

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2386/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12769] Adds support for expanding a Java cross-language transform


------------------------------------------
[...truncated 352.86 KB...]
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2134971427]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
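
For reference, here is a minimal Java sketch of the remedy the error message itself suggests: attach a schema via PCollection.setRowSchema so a coder for Row can be resolved. The field names mirror the query's projection; the field types, class name, and sample values are assumptions for illustration.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TypeDescriptor;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Field names follow the projected columns in the failing query;
        // the types are assumed for illustration.
        final Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt32Field("score")
                .build();

        PCollection<Row> rows =
            p.apply(Create.of("story", "job"))
                .apply(
                    MapElements.into(TypeDescriptor.of(Row.class))
                        .via(t -> Row.withSchema(schema)
                            .addValues("someone", t, "a title", 3)
                            .build()))
                // Without this call, coder inference fails exactly as logged:
                // Beam cannot infer a Coder for Row on its own.
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }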

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 05, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 05, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 05, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 05, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 05, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 05, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 05, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 05, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
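
For reference, the pushed-down read above corresponds to what one could write by hand with BigQueryIO: a DIRECT_READ that projects the same fields and applies the same row restriction. A minimal Java sketch follows, under the assumption of an illustrative table reference and class name (the IT's own table is `beam`.`HACKER_NEWS` per the plan above).

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownEquivalentSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            BigQueryIO.readTableRows()
                // Illustrative table reference, assumed for this sketch.
                .from("bigquery-public-data:hacker_news.full")
                .withMethod(Method.DIRECT_READ)
                // Same projection and filter the planner pushed down above.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
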
    Sep 05, 2021 12:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 05, 2021 12:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 05, 2021 12:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112463 bytes, hash f41cb218a1041a39cf3cf7afdb72df9a847c5a7d57d926c1dad92dd8bd1e3f95> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-9ByyGKEEGjnPPPev23LfmoR8Wn1X2SbB2tkt2L0eP5U.pb
    Sep 05, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 05, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 05, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2707828256434146384.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-jf_V0bMI3mpHFu88TUVTELO7vySycrb3r_Bta5kQDEQ.jar
    Sep 05, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 files newly uploaded in 0 seconds
    Sep 05, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 05, 2021 12:45:32 AM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)

    Sep 05, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 05, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 05, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 05, 2021 12:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-04_17_45_33-10693535116842576941?project=apache-beam-testing
    Sep 05, 2021 12:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-04_17_45_33-10693535116842576941
    Sep 05, 2021 12:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-04_17_45_33-10693535116842576941
    Sep 05, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-05T00:45:36.848Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 05, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:45:43.482Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 05, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:45:44.128Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 05, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:45:44.164Z: Expanding GroupByKey operations into optimizable parts.
    Sep 05, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:45:44.200Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 05, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:45:44.273Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 05, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:45:44.307Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 05, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:45:44.348Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 05, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:45:44.812Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 05, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:45:44.900Z: Starting 5 workers in us-central1-c...
    Sep 05, 2021 12:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:46:07.834Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 05, 2021 12:46:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:46:19.665Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Sep 05, 2021 12:46:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:46:19.693Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Sep 05, 2021 12:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:46:29.998Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 05, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:46:53.614Z: Workers have started successfully.
    Sep 05, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:46:53.691Z: Workers have started successfully.
    Sep 05, 2021 12:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:47:24.873Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 05, 2021 12:47:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:47:25.026Z: Cleaning up.
    Sep 05, 2021 12:47:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:47:25.120Z: Stopping worker pool...
    Sep 05, 2021 12:49:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:49:42.646Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 05, 2021 12:49:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-05T00:49:42.694Z: Worker pool stopped.
    Sep 05, 2021 12:49:48 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-04_17_45_33-10693535116842576941 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f926465d-b3e6-40ed-9814-76040c167621 and timestamp: 2021-09-05T00:49:48.149000000Z:
                     Metric:                    Value:
                   read_time                     9.682
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 05, 2021 12:49:48 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 34.361 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 28s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/4u2tvivgrblcw

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2385

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2385/display/redirect>

Changes:


------------------------------------------
[...truncated 347.67 KB...]
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 04, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 04, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 04, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 04, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 04, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 04, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 04, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 04, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 04, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 04, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 04, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112461 bytes, hash ecf29a12b161196abd947e61668899f94fe6ec1fe7866013d31e744d8cd6c442> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-7PKaErFhGWq9lH5hZoiZ-U_m7B_nhmAT0x50TYzWxEI.pb
    Sep 04, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 04, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3084273916801856233.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-TPyAvegaVjPfkTwlGinwhxXob9UjXOPbZjd03XwpZD4.jar
    Sep 04, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 04, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.duct-tape/duct-tape/1.0.8/92edc22a9ab2f3e17c9bf700aaee377d50e8b530/duct-tape-1.0.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/duct-tape-1.0.8-Mc7xLd7JedH4bXz3CMQaF9pSPQXGhf1mQunQsq3bckA.jar
    Sep 04, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.visible-assertions/visible-assertions/2.1.2/20d31a578030ec8e941888537267d3123c2ad1c1/visible-assertions-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/visible-assertions-2.1.2-RQSulosjfNzcto_1sHqmOr5JkvkHp3w9YSCqm5BBQBw.jar
    Sep 04, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna/5.5.0/e0845217c4907822403912ad6828d8e0b256208/jna-5.5.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-5.5.0-swj66_5O1AnehBDgpjLRZLISawNfbqz_lo05CMr7TZ4.jar
    Sep 04, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna-platform/4.0.0/deb6bf66918989b50209b8c9aaf3b2561af7f011/jna-platform-4.0.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-platform-4.0.0-B21i7Yfna9yzdQ_-gKpJXuIRK9chJmOVTWVH9IJEkuk.jar
    Sep 04, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 243 files cached, 5 files newly uploaded in 0 seconds
    Sep 04, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 04, 2021 6:45:08 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)

    Sep 04, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 04, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 04, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 04, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-04_11_45_08-8099499084905887704?project=apache-beam-testing
    Sep 04, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-04_11_45_08-8099499084905887704
    Sep 04, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-04_11_45_08-8099499084905887704
    Sep 04, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-04T18:45:12.097Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 04, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:45:19.724Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 04, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:45:20.499Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 04, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:45:20.532Z: Expanding GroupByKey operations into optimizable parts.
    Sep 04, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:45:20.563Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 04, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:45:20.632Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 04, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:45:20.668Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 04, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:45:20.703Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 04, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:45:21.081Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 04, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:45:21.161Z: Starting 5 workers in us-central1-c...
    Sep 04, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:45:33.227Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 04, 2021 6:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:46:05.860Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 04, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:46:30.741Z: Workers have started successfully.
    Sep 04, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:46:30.771Z: Workers have started successfully.
    Sep 04, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:47:01.658Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 04, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:47:01.881Z: Cleaning up.
    Sep 04, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:47:01.992Z: Stopping worker pool...
    Sep 04, 2021 6:49:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:49:19.784Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 04, 2021 6:49:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T18:49:19.818Z: Worker pool stopped.
    Sep 04, 2021 6:49:25 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-04_11_45_08-8099499084905887704 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8f7b30f5-0af2-498b-86f6-a390c8855b77 and timestamp: 2021-09-04T18:49:25.088000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.303

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 04, 2021 6:49:25 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 35.083 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/5zh6rmapekmfa

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2384

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2384/display/redirect>

Changes:


------------------------------------------
[...truncated 346.89 KB...]
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@286566052]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
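
The exception above names its own fix: attach an explicit schema to the Row PCollection with PCollection.setRowSchema so a RowCoder can be inferred. A minimal sketch of that fix follows; the schema fields are taken from the query in this log, while the surrounding pipeline is illustrative rather than the test's actual code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        // Schema for the four fields the test query selects.
        Schema schema =
            Schema.builder()
                .addNullableField("by", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());
        Row row =
            Row.withSchema(schema).addValues("someone", "story", "A title", 3L).build();

        // Attaching the schema is what lets coder inference succeed; without it,
        // PCollection.getCoder() fails exactly as in the trace above.
        PCollection<Row> rows = p.apply(Create.of(row).withRowSchema(schema));
        // An existing PCollection<Row> can equally be repaired in place:
        // rows.setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }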

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 04, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 04, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
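
The DIRECT_READ method logged above is the BigQuery Storage Read API path. Outside of Beam SQL, the equivalent plain BigQueryIO read, including the projection and filter that the SQL push-down achieves, looks roughly like this sketch (the pipeline variable and table reference are illustrative, not the test's):

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    pipeline.apply(
        BigQueryIO.readTableRows()
            .from("some-project:hacker_news.full")  // illustrative table reference
            .withMethod(Method.DIRECT_READ)
            // Column projection and row restriction: the Storage API analogue
            // of the field/filter push-down shown in the plans below.
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
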
    Sep 04, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 04, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 04, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 04, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 04, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 04, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
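
To reproduce the query outside this harness, the same projection and filter can be run with Beam SQL over any schema-aware PCollection<Row>; "PCOLLECTION" is the implicit table name in that single-input mode. A minimal sketch, with the input collection assumed:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // hackerNewsRows: an assumed schema-aware PCollection<Row> input.
    PCollection<Row> filtered =
        hackerNewsRows.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
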
    Sep 04, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 04, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 04, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112459 bytes, hash 26c7ed55e31cbda68728b581403b027ebb42182e9a7e895eb7a0defb3088c8a6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-JsftVeMcvaaHKLWBQDsCfrtCGC6afolet6De-zCIyKY.pb
    Sep 04, 2021 12:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 04, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7050467490238436215.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-K8vlJNYVl3mXkMVuFyLisVTl64pJsfw8H3wy39zXD98.jar
    Sep 04, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 04, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 files newly uploaded in 0 seconds
    Sep 04, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 04, 2021 12:45:13 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)
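
The SEVERE message above states the remedy itself: shut the channel down and wait for termination instead of letting it be garbage-collected while open. A minimal sketch of that discipline against the io.grpc API; the target string is copied from the log, everything else is illustrative:

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;
    import java.util.concurrent.TimeUnit;

    static void useAndClose() throws InterruptedException {
      ManagedChannel channel =
          ManagedChannelBuilder.forTarget("bigquerystorage.googleapis.com:443").build();
      try {
        // ... issue RPCs over the channel ...
      } finally {
        channel.shutdown();                                    // orderly shutdown
        if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {  // bounded wait
          channel.shutdownNow();                               // then force-close
        }
      }
    }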

    Sep 04, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 04, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 04, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 04, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-04_05_45_14-4678713245512965619?project=apache-beam-testing
    Sep 04, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-04_05_45_14-4678713245512965619
    Sep 04, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-04_05_45_14-4678713245512965619
    Sep 04, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-04T12:45:17.341Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 04, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:45:22.576Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 04, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:45:23.309Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 04, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:45:23.349Z: Expanding GroupByKey operations into optimizable parts.
    Sep 04, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:45:23.375Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 04, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:45:23.439Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 04, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:45:23.468Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 04, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:45:23.495Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 04, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:45:23.840Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 04, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:45:23.908Z: Starting 5 workers in us-central1-c...
    Sep 04, 2021 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:45:36.339Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 04, 2021 12:46:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:46:09.856Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 04, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:46:35.394Z: Workers have started successfully.
    Sep 04, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:46:35.426Z: Workers have started successfully.
    Sep 04, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:47:03.940Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 04, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:47:04.091Z: Cleaning up.
    Sep 04, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:47:04.177Z: Stopping worker pool...
    Sep 04, 2021 12:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:49:22.360Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 04, 2021 12:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T12:49:22.399Z: Worker pool stopped.
    Sep 04, 2021 12:49:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-04_05_45_14-4678713245512965619 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 95d345fa-ad6d-487e-bb39-d90b3bd9b1af and timestamp: 2021-09-04T12:49:27.956000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.659

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 04, 2021 12:49:28 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 34.895 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/6p72oxi7zo6ga

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2383

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2383/display/redirect>

Changes:


------------------------------------------
[...truncated 347.27 KB...]
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@70318993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 04, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 04, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 04, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 04, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 04, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 04, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 04, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 04, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 04, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 04, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 04, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112461 bytes, hash 534377993bcd3c3a171072dc27c937ef03b56ce5fe894be8687414c5d3dc38b8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-U0N3mTvNPDoXEHLcJ8k37wO1bOX-iUvoaHQUxdPcOLg.pb
    Sep 04, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 04, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5268561719917520832.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6oqD1whRXoQ6yzDZpHOvqd5p5vxwSFJo4IcBF59oY3c.jar
    Sep 04, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 04, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 files newly uploaded in 0 seconds
    Sep 04, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 04, 2021 6:45:11 AM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)

    Sep 04, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 04, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 04, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 04, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-03_23_45_12-2301445323111670089?project=apache-beam-testing
    Sep 04, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-03_23_45_12-2301445323111670089
    Sep 04, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-03_23_45_12-2301445323111670089
    Sep 04, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-04T06:45:15.483Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 04, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:45:20.939Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 04, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:45:21.722Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 04, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:45:21.769Z: Expanding GroupByKey operations into optimizable parts.
    Sep 04, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:45:21.816Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 04, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:45:21.882Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 04, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:45:21.918Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 04, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:45:21.951Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 04, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:45:22.279Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 04, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:45:22.345Z: Starting 5 workers in us-central1-a...
    Sep 04, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:45:32.545Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 04, 2021 6:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:46:08.178Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 04, 2021 6:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:46:34.026Z: Workers have started successfully.
    Sep 04, 2021 6:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:46:34.057Z: Workers have started successfully.
    Sep 04, 2021 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:47:01.774Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 04, 2021 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:47:01.903Z: Cleaning up.
    Sep 04, 2021 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:47:01.980Z: Stopping worker pool...
    Sep 04, 2021 6:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:49:21.464Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 04, 2021 6:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T06:49:21.511Z: Worker pool stopped.
    Sep 04, 2021 6:49:27 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-03_23_45_12-2301445323111670089 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c669b485-edcd-46b4-8afc-70612dc56ade and timestamp: 2021-09-04T06:49:27.517000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.149

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 04, 2021 6:49:28 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.043 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 34.236 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/a6q5lud2o6yau

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2382

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2382/display/redirect>

Changes:


------------------------------------------
[...truncated 347.82 KB...]
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@286566052]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 04, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 04, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 04, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 04, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 04, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 04, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 04, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 04, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 04, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 04, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 04, 2021 12:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112459 bytes, hash b111799e38c3a6be8ecbf7bd01c73aea54afbb5883e872b652262ca5720e2184> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-sRF5njjDpr6Oy_e9Acc66lSvu1iD6HK2UiYspXIOIYQ.pb
    Sep 04, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 04, 2021 12:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 04, 2021 12:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7519257491340368708.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-lU6yTasXEy1jKZc6V84VLRmEPBoMpZZhxZzo1R-6ALE.jar
    Sep 04, 2021 12:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.alibaba/fastjson/1.2.69/6cb063f1d527ff65bdbb9ea74888a5ffc3f92197/fastjson-1.2.69.jar to gs://temp-storage-for-perf-tests/loadtests/staging/fastjson-1.2.69-KniRdEoDeOAJmpuB3b7U0uGn6GMvb4b4_8-jg-UeScg.jar
    Sep 04, 2021 12:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 246 files cached, 2 files newly uploaded in 0 seconds
    Sep 04, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 04, 2021 12:45:11 AM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)
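
The SEVERE entry above is gRPC's orphaned-channel detector, not the cause of the build failure: a ManagedChannel opened to bigquerystorage.googleapis.com during pipeline validation (inside the BigQueryWriteClient visible in the trace) was garbage-collected without ever being shut down. The clean-up the message asks for looks roughly like the sketch below; the channel parameter is a stand-in, since the leaked one is created internally by the BigQuery client, so the real fix belongs in the SDK's client lifecycle rather than in this test:

    import io.grpc.ManagedChannel;
    import java.util.concurrent.TimeUnit;

    // Sketch of the shutdown discipline the gRPC warning asks for: start an
    // orderly shutdown, give in-flight RPCs a moment to drain, then force it.
    final class Channels {
      static void shutdownAndAwait(ManagedChannel channel) throws InterruptedException {
        channel.shutdown();
        if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
          channel.shutdownNow();
        }
      }
    }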

    Sep 04, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 04, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 04, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 04, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-03_17_45_12-18161074009312431671?project=apache-beam-testing
    Sep 04, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-03_17_45_12-18161074009312431671
    Sep 04, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-03_17_45_12-18161074009312431671
    Sep 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-04T00:45:15.504Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 04, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:45:20.637Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 04, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:45:21.298Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 04, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:45:21.341Z: Expanding GroupByKey operations into optimizable parts.
    Sep 04, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:45:21.375Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 04, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:45:21.442Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 04, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:45:21.470Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 04, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:45:21.504Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 04, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:45:21.872Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 04, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:45:21.945Z: Starting 5 workers in us-central1-a...
    Sep 04, 2021 12:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:45:46.022Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 04, 2021 12:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:45:59.192Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 04, 2021 12:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:45:59.217Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 04, 2021 12:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:46:09.466Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 04, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:46:33.922Z: Workers have started successfully.
    Sep 04, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:46:33.958Z: Workers have started successfully.
    Sep 04, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:47:01.658Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 04, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:47:01.795Z: Cleaning up.
    Sep 04, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:47:01.862Z: Stopping worker pool...
    Sep 04, 2021 12:49:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:49:29.082Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 04, 2021 12:49:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-04T00:49:29.134Z: Worker pool stopped.
    Sep 04, 2021 12:49:35 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-03_17_45_12-18161074009312431671 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): afc124b2-d3df-4bf3-b447-60d39de3bdf2 and timestamp: 2021-09-04T00:49:35.695000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.903

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 04, 2021 12:49:36 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
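
The warning above only means this run's metrics stayed local: the InfluxDB publisher bails out when the measurement/database settings are absent. Assuming the option names Beam's perf-test harness commonly uses for its InfluxDB settings (treat the names and every value below as unverified placeholders), publishing could be switched on with something like:

    # All values are hypothetical; check the suite's InfluxDB options before use.
    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest \
      -DintegrationTestPipelineOptions='["--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=beam_test_metrics","--influxHost=http://localhost:8086"]'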

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 42.676 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/2f4ix2irnoptu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2381

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2381/display/redirect>

Changes:


------------------------------------------
[...truncated 346.84 KB...]
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@70318993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
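
This readUsingDefaultMethod failure is Beam's stock missing-Coder diagnostic for a Row-typed PCollection: a RowCoder can only be inferred once a schema is attached. The message itself names the general remediation, sketched below with a schema guessed from the query's projection (in this suite the unset schema appears to originate inside the SQL-to-PCollection conversion, given the trace ends in BeamSqlRelUtils.toPCollection, so the underlying bug is likely Beam-side rather than in the test code):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Sketch of the fix the error message suggests: attach a schema so the
    // CoderRegistry can derive a RowCoder. Field names/types are assumptions
    // taken from the query's SELECT list, not the suite's actual schema object.
    final class RowSchemas {
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();
        return rows.setRowSchema(schema);
      }
    }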

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 03, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 03, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 03, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 03, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 03, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 03, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 03, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 03, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 03, 2021 6:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 03, 2021 6:45:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 03, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112460 bytes, hash 9925b8aebfc1d4920ccbea8ef72f91fbb7e0d1b9812bd48b266d817398fa63b8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-mSW4rr_B1JIMy-qO9y-R-7fg0bmBK9SLJm2Bc5j6Y7g.pb
    Sep 03, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 03, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test101276196055653085.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-YRglKCfmXPqyJHwBe-8liFPzC0Yx-0C9HsvWNnaIa54.jar
    Sep 03, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 03, 2021 6:45:31 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 files newly uploaded in 0 seconds
    Sep 03, 2021 6:45:31 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 03, 2021 6:45:31 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
        [...stack frames truncated: identical, frame for frame, to the "ManagedChannel allocation site" trace earlier in this digest...]

    Sep 03, 2021 6:45:31 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 03, 2021 6:45:31 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 03, 2021 6:45:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 03, 2021 6:45:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-03_11_45_32-10410911850525840854?project=apache-beam-testing
    Sep 03, 2021 6:45:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-03_11_45_32-10410911850525840854
    Sep 03, 2021 6:45:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-03_11_45_32-10410911850525840854
    Sep 03, 2021 6:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-03T18:45:35.566Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 03, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:45:43.672Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 03, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:45:44.453Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 03, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:45:44.498Z: Expanding GroupByKey operations into optimizable parts.
    Sep 03, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:45:44.528Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 03, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:45:44.604Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 03, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:45:44.629Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 03, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:45:44.664Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 03, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:45:45.147Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 03, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:45:45.233Z: Starting 5 workers in us-central1-c...
    Sep 03, 2021 6:46:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:46:15.004Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 03, 2021 6:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:46:30.091Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 03, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:46:55.896Z: Workers have started successfully.
    Sep 03, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:46:55.924Z: Workers have started successfully.
    Sep 03, 2021 6:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:47:27.166Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 03, 2021 6:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:47:27.309Z: Cleaning up.
    Sep 03, 2021 6:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:47:27.401Z: Stopping worker pool...
    Sep 03, 2021 6:49:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:49:47.444Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 03, 2021 6:49:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T18:49:47.496Z: Worker pool stopped.
    Sep 03, 2021 6:49:52 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-03_11_45_32-10410911850525840854 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 48eea044-a623-4a09-89dc-c7f1e5afca36 and timestamp: 2021-09-03T18:49:53Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.644

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 03, 2021 6:49:53 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 42.734 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 20s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/eb3crdwfzk5ty

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2380

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2380/display/redirect?page=changes>

Changes:

[aydar.zaynutdinov] [BEAM-11873] Add support for writes with returning values in JdbcIO


------------------------------------------
[...truncated 361.94 KB...]
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1121334267]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 03, 2021 12:50:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 03, 2021 12:50:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 03, 2021 12:50:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 03, 2021 12:50:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 03, 2021 12:50:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 03, 2021 12:50:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 03, 2021 12:50:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 03, 2021 12:50:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 03, 2021 12:50:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 03, 2021 12:50:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 03, 2021 12:50:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112460 bytes, hash 2e354590ef801a216f260851173a8ca974f7d28030aa974a58fb40f5ed76feaa> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-LjVFkO-AGiFvJghRFzqMqXT30oAwqpdKWPtA9e12_qo.pb
    Sep 03, 2021 12:50:37 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 03, 2021 12:50:38 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 03, 2021 12:50:38 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3018342601264430199.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-EQWikt4wAkgOHsKMlo92X1AI8bsSGjLtuzLANCPfRXk.jar
    Sep 03, 2021 12:50:44 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 1 files newly uploaded in 6 seconds
    Sep 03, 2021 12:50:45 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 03, 2021 12:50:45 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
        [...stack frames truncated: identical, frame for frame, to the "ManagedChannel allocation site" trace earlier in this digest...]

    Sep 03, 2021 12:50:46 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 03, 2021 12:50:46 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 03, 2021 12:50:47 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 03, 2021 12:50:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-03_05_50_48-7456646592635181481?project=apache-beam-testing
    Sep 03, 2021 12:50:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-03_05_50_48-7456646592635181481
    Sep 03, 2021 12:50:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-03_05_50_48-7456646592635181481
    Sep 03, 2021 12:50:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-03T12:50:51.538Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 03, 2021 12:50:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:50:59.206Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 03, 2021 12:51:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:51:00.073Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 03, 2021 12:51:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:51:00.101Z: Expanding GroupByKey operations into optimizable parts.
    Sep 03, 2021 12:51:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:51:00.132Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 03, 2021 12:51:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:51:00.205Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 03, 2021 12:51:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:51:00.244Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 03, 2021 12:51:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:51:00.275Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 03, 2021 12:51:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:51:00.660Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 03, 2021 12:51:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:51:00.739Z: Starting 5 workers in us-central1-c...
    Sep 03, 2021 12:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:51:25.158Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 03, 2021 12:51:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:51:36.762Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Sep 03, 2021 12:51:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:51:36.791Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Sep 03, 2021 12:51:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:51:47.149Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 03, 2021 12:52:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:52:10.640Z: Workers have started successfully.
    Sep 03, 2021 12:52:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:52:10.668Z: Workers have started successfully.
    Sep 03, 2021 12:52:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:52:40.301Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 03, 2021 12:52:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:52:40.452Z: Cleaning up.
    Sep 03, 2021 12:52:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:52:40.540Z: Stopping worker pool...
    Sep 03, 2021 12:54:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:54:57.756Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 03, 2021 12:54:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T12:54:57.833Z: Worker pool stopped.
    Sep 03, 2021 12:55:03 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-03_05_50_48-7456646592635181481 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c1a15772-5fc4-4c23-9853-48611afba533 and timestamp: 2021-09-03T12:55:04.222000000Z:
                     Metric:                    Value:
                   read_time                      8.35
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 03, 2021 12:55:06 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
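
The warning above means the run's metrics were collected but never published: the InfluxDB "measurement" and "database" settings were absent. A minimal sketch of supplying them through Beam's test-utils publishing API; it assumes the InfluxDBSettings builder in org.apache.beam.sdk.testutils.publishing, and the host, database, and measurement values here are placeholders, not what this Jenkins job is meant to use:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    class InfluxConfigSketch {
      // Placeholder values; a real job would read these from pipeline options.
      static InfluxDBSettings settings() {
        return InfluxDBSettings.builder()
            .withHost("http://localhost:8086")  // assumed local InfluxDB endpoint
            .withDatabase("beam_test_metrics")  // supplies the missing "database"
            .withMeasurement("sql_bqio_read")   // supplies the missing "measurement"
            .get();
      }
    }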

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.248 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.23 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 5 mins 46.996 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 10m 22s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/zrweouhbad3ic

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2379

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2379/display/redirect?page=changes>

Changes:

[Andrew Pilloud] [BEAM-12680] Calcite SqlTransform no longer experimental


------------------------------------------
[...truncated 351.35 KB...]


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@286566052]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
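
The exception message above spells out its own remedies: set a Coder explicitly with setCoder(), or, since the elements are Beam Rows, attach a Schema with PCollection.setRowSchema so a RowCoder can be inferred. A minimal sketch of the second option; the field list simply mirrors the four columns projected by the test's query below and is not the test's actual schema:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaSketch {
      // Illustrative schema mirroring the projected columns (author, type, title, score).
      static final Schema SCHEMA =
          Schema.builder()
              .addNullableField("author", Schema.FieldType.STRING)
              .addNullableField("type", Schema.FieldType.STRING)
              .addNullableField("title", Schema.FieldType.STRING)
              .addNullableField("score", Schema.FieldType.INT64)
              .build();

      static PCollection<Row> withSchema(PCollection<Row> rows) {
        // Once a schema is attached, Beam infers a RowCoder and the
        // "Unable to return a default Coder" precondition no longer fails.
        return rows.setRowSchema(SCHEMA);
      }
    }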

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 03, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 03, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 03, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 03, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 03, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 03, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 03, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 03, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
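
The plans above show what the push-down buys: SQLPlan still carries a LogicalFilter and LogicalProject, while BEAMPlan collapses both into the IO source, so only the four usedFields are read and the supported filter is evaluated by BigQuery itself. Push-down of this kind requires the table to be registered with the DIRECT_READ method. A hedged sketch of declaring and querying such a table through Beam SQL; it assumes SqlTransform's withDdlString hook and the DDL shape from the Beam BigQuery table-provider docs, with a placeholder project/dataset path and an abbreviated column list:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PBegin;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class PushDownSketch {
      // Abbreviated columns; the real hacker_news table has more fields.
      static final String DDL =
          "CREATE EXTERNAL TABLE HACKER_NEWS "
              + "(title VARCHAR, score BIGINT, `by` VARCHAR, type VARCHAR) "
              + "TYPE bigquery "
              + "LOCATION 'my-project:my_dataset.hacker_news' "  // placeholder path
              + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'"; // enables push-down

      static PCollection<Row> read(PBegin begin) {
        return begin.apply(
            SqlTransform.query(
                    "SELECT `by` AS author, type, title, score FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                .withDdlString(DDL));
      }
    }
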
    Sep 03, 2021 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 03, 2021 6:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 03, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112460 bytes, hash 408cd3d127337593eb7a3f0cf8ca306acfc7762cd8c387b011327a5cfa310e31> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-QIzT0SczdZPrej8M-Mowas_HdizYw4ewETJ6XPoxDjE.pb
    Sep 03, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 03, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-NzuIuC0tWIGa8mV7asF_NOtTs4zf7xUyMI972IiYOJo.jar
    Sep 03, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test9092202468420968078.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-BdLfzE8ukUtVSY0FvVhGjkcHODSRk6YRre6SD9JtTPE.jar
    Sep 03, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 03, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 246 files cached, 2 files newly uploaded in 0 seconds
    Sep 03, 2021 6:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 03, 2021 6:45:31 AM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)
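
The SEVERE block above is gRPC's orphaned-channel detector: a channel to bigquerystorage.googleapis.com was opened while BigQueryIO validated the pipeline, then garbage-collected without an orderly shutdown. The general pattern the message asks for, sketched outside Beam's internals:

    import java.util.concurrent.TimeUnit;

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    class ChannelShutdownSketch {
      static void useChannel() throws InterruptedException {
        ManagedChannel channel =
            ManagedChannelBuilder.forTarget("bigquerystorage.googleapis.com:443").build();
        try {
          // ... issue RPCs over the channel ...
        } finally {
          channel.shutdown(); // or shutdownNow() to also cancel in-flight calls
          if (!channel.awaitTermination(10, TimeUnit.SECONDS)) {
            channel.shutdownNow(); // force-close if graceful shutdown stalls
          }
        }
      }
    }

For the generated clients in the trace (for example BigQueryWriteClient), the equivalent fix is simply closing the client, e.g. with try-with-resources, since it implements AutoCloseable and shuts its channels down on close().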

    Sep 03, 2021 6:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 03, 2021 6:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 03, 2021 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 03, 2021 6:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-02_23_45_31-6395771659507821382?project=apache-beam-testing
    Sep 03, 2021 6:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-02_23_45_31-6395771659507821382
    Sep 03, 2021 6:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-02_23_45_31-6395771659507821382
    Sep 03, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-03T06:45:35.171Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 03, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:45:40.138Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 03, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:45:40.952Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 03, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:45:40.999Z: Expanding GroupByKey operations into optimizable parts.
    Sep 03, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:45:41.027Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 03, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:45:41.092Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 03, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:45:41.118Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 03, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:45:41.150Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 03, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:45:41.478Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 03, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:45:41.545Z: Starting 5 workers in us-central1-a...
    Sep 03, 2021 6:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:46:12.507Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 03, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:46:25.361Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 03, 2021 6:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:46:55.167Z: Workers have started successfully.
    Sep 03, 2021 6:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:46:55.188Z: Workers have started successfully.
    Sep 03, 2021 6:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:47:21.396Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 03, 2021 6:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:47:21.514Z: Cleaning up.
    Sep 03, 2021 6:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:47:21.582Z: Stopping worker pool...
    Sep 03, 2021 6:49:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:49:42.524Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 03, 2021 6:49:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T06:49:42.569Z: Worker pool stopped.
    Sep 03, 2021 6:49:51 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-02_23_45_31-6395771659507821382 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 15639960-c59e-4ada-9121-07c04f0b0da3 and timestamp: 2021-09-03T06:49:51.615000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.425

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 03, 2021 6:49:52 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 38.174 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 29s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/7bs5ms5dihfh4

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2378

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2378/display/redirect?page=changes>

Changes:

[Andrew Pilloud] s/org.apache.beam.vendor.calcite.v1_20_0/org.apache.beam.vendor.calcite.v1_26_0/g

[Andrew Pilloud] [BEAM-9379] Update to vendored Calcite to 1.26.0

[Andrew Pilloud] Fix flattened rows

[Andrew Pilloud] Fix DDL

[Andrew Pilloud] Handle BeamRelNode in RelSubset

[Andrew Pilloud] Fix BeamIOPushDown

[Andrew Pilloud] [BEAM-9190] Update BeamBigQuerySqlDialect

[Andrew Pilloud] Remap IN to Search

[Andrew Pilloud] [BEAM-9379] Use byte[] instead of ByteString for (VAR)BINARY in UDFs.

[Andrew Pilloud] [BEAM-9379] Update UDF NULL type mismatch test since there is stricter

[Andrew Pilloud] Fix ZetaSQL window function mapping

[Andrew Pilloud] Fix Bigtable tests that depend on SQL types

[Andrew Pilloud] Workaround CALCITE-4759 in JoinPushThroughJoinRule

[Andrew Pilloud] Disable nested bytes tests, sorry!

[Andrew Pilloud] SqlLine is rotting, Just CAST types for now

[Andrew Pilloud] Update CHANGES.md

[Andrew Pilloud] Up spotbug stack size

[Andrew Pilloud] Fix BeamMatchRel copy

[Andrew Pilloud] partitionKey everywhere

[Andrew Pilloud] Make it functional

[Andrew Pilloud] Update CreateFunction

[Andrew Pilloud] No tpcds dependency

[Andrew Pilloud] Fix default time types

[Ankur Goenka] fixing release date

[noreply] [BEAM-12823] TestStream Support in Samza Portable Runner (#15421)


------------------------------------------
[...truncated 364.70 KB...]
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 03, 2021 12:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 03, 2021 12:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 03, 2021 12:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 03, 2021 12:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 03, 2021 12:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 03, 2021 12:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 03, 2021 12:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 03, 2021 12:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 03, 2021 12:46:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 03, 2021 12:46:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 03, 2021 12:46:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <112460 bytes, hash 7e29938c75dd500f595719334d07a824da330071ee38dce689c08b1d088e6ed8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-fimTjHXdUA9ZVxkzTQeoJNozAHHuONzmicCLHQiObtg.pb
    Sep 03, 2021 12:46:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 248 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 03, 2021 12:46:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4667068793271008819.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-FaVVKvpkSVt0TquI7OuZqoyo34Rbag65zR13YcFz-HY.jar
    Sep 03, 2021 12:46:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-O4FtkomfozNZ5s1ThCH4YVQsv5jjgpXjdfBRuAgh72g.jar
    Sep 03, 2021 12:46:08 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-QwBCBtprx84y_3PbjBTvra8IZ_z5DGx5ADyBhvZp408.jar
    Sep 03, 2021 12:46:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-tests-b_iPDtxotfaDaPPyMCbCTp8vYSu2A6yRI3xwtrNkFTk.jar
    Sep 03, 2021 12:46:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-vendor-calcite-1_26_0/0.1/5fae4e97a2d8739462bd1572e48d01228766b6ef/beam-vendor-calcite-1_26_0-0.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-calcite-1_26_0-0.1-pYZ7esxRWyhKmBqBdfrpnxvg8woyykTvGbaCvLtyRyA.jar
    Sep 03, 2021 12:46:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 244 files cached, 4 files newly uploaded in 1 seconds
    Sep 03, 2021 12:46:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 03, 2021 12:46:10 AM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
    SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=13, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
        Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
    java.lang.RuntimeException: ManagedChannel allocation site
    	at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    	at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    	at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:615)
    	at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:327)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1700(InstantiatingGrpcChannelProvider.java:74)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:220)
    	at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:227)
    	at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:210)
    	at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:138)
    	at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:128)
    	at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:109)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1263)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:139)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:510)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:455)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:172)
    	at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:973)
    	at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:662)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:581)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:585)
    	at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:240)
    	at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:214)
    	at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:469)
    	at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:598)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:322)
    	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:309)
    	at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethodPushDown(BigQueryIOPushDownIT.java:136)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    	at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    	at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
    	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
    	at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    	at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
    	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
    	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
    	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
    	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
    	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
    	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
    	at java.lang.Thread.run(Thread.java:748)

    Sep 03, 2021 12:46:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 03, 2021 12:46:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 03, 2021 12:46:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 03, 2021 12:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-02_17_46_10-9110703423107782645?project=apache-beam-testing
    Sep 03, 2021 12:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-02_17_46_10-9110703423107782645
    Sep 03, 2021 12:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-02_17_46_10-9110703423107782645
    Sep 03, 2021 12:46:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-03T00:46:13.946Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 03, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:46:19.639Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 03, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:46:20.455Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 03, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:46:20.497Z: Expanding GroupByKey operations into optimizable parts.
    Sep 03, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:46:20.534Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 03, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:46:20.620Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 03, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:46:20.655Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 03, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:46:20.703Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 03, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:46:21.085Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 03, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:46:21.157Z: Starting 5 workers in us-central1-a...
    Sep 03, 2021 12:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:46:26.342Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 03, 2021 12:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:47:06.735Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 03, 2021 12:47:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:47:32.227Z: Workers have started successfully.
    Sep 03, 2021 12:47:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:47:32.268Z: Workers have started successfully.
    Sep 03, 2021 12:48:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:48:00.519Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 03, 2021 12:48:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:48:00.639Z: Cleaning up.
    Sep 03, 2021 12:48:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:48:00.721Z: Stopping worker pool...
    Sep 03, 2021 12:50:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:50:19.625Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 03, 2021 12:50:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-03T00:50:19.672Z: Worker pool stopped.
    Sep 03, 2021 12:50:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-02_17_46_10-9110703423107782645 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2de0ee76-a6d9-4cb0-96bb-160cf71e7701 and timestamp: 2021-09-03T00:50:26.230000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.841

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 03, 2021 12:50:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 35.033 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 4s
152 actionable tasks: 109 executed, 43 from cache

Publishing build scan...
https://gradle.com/s/zyrh4jzvbum5i

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure
